url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-2.35B) | node_id (string, 18-32 chars) | number (int64, 1-6.97k) | title (string, 1-290 chars) | user (dict) | labels (list, 0-4 items) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list, 0-4 items) | milestone (dict) | comments (int64, 0-70) | created_at (unknown) | updated_at (unknown) | closed_at (unknown) | author_association (string, 4 classes) | active_lock_reason (float64) | draft (float64, 0-1, nullable) | pull_request (dict) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (float64) | state_reason (string, 3 classes) | existe_pull_request (bool, 2 classes) | comentarios (sequence, 0-30 items) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/233/comments | https://api.github.com/repos/huggingface/datasets/issues/233/events | https://github.com/huggingface/datasets/issues/233 | 630,432,132 | MDU6SXNzdWU2MzA0MzIxMzI= | 233 | Fail to download c4 english corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4",
"events_url": "https://api.github.com/users/donggyukimc/events{/privacy}",
"followers_url": "https://api.github.com/users/donggyukimc/followers",
"following_url": "https://api.github.com/users/donggyukimc/following{/other_user}",
"gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/donggyukimc",
"id": 16605764,
"login": "donggyukimc",
"node_id": "MDQ6VXNlcjE2NjA1NzY0",
"organizations_url": "https://api.github.com/users/donggyukimc/orgs",
"received_events_url": "https://api.github.com/users/donggyukimc/received_events",
"repos_url": "https://api.github.com/users/donggyukimc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/donggyukimc"
} | [] | closed | false | null | [] | null | 5 | "2020-06-04T01:06:38Z" | "2021-01-08T07:17:32Z" | "2020-06-08T09:16:59Z" | NONE | null | null | null | I ran the following code to download the C4 English corpus:
```
dataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner', data_dir='/mypath')
```
and it failed as follows:
```
Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/adam/.cache/huggingface/datasets/c4/en/2.3.0...
Traceback (most recent call last):
File "download_corpus.py", line 38, in <module>
, data_dir='/home/adam/data/corpus/en/c4')
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset
save_infos=save_infos,
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 420, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 816, in _download_and_prepare
dl_manager, verify_infos=False, pipeline=pipeline,
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 457, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/datasets/c4/f545de9f63300d8d02a6795e2eb34e140c47e62a803f572ac5599e170ee66ecc/c4.py", line 175, in _split_generators
dl_manager.download_checksums(_CHECKSUMS_URL)
AttributeError: 'DownloadManager' object has no attribute 'download_checksums
```
Can I get any advice? | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/233/timeline | null | completed | false | [
"Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. You can find more info on beam datasets [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md).\r\n\r\nOur goal in the future is to make available an already-processed version of C4 (as we do for wikipedia for example) so that users without apache beam runtimes can load it.",
"@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/devops/.cache/huggingface/datasets/c4/en/2.3.0/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/datasets/c4/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download(self, url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 
)\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?",
"I have the same problem as @prashant-kikani",
"Looks like a bug in the dataset script, can you open an issue ?",
"I see the same issue as @prashant-kikani. I'm using `datasets` version 1.2.0 to download C4."
] |
https://api.github.com/repos/huggingface/datasets/issues/232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/232/comments | https://api.github.com/repos/huggingface/datasets/issues/232/events | https://github.com/huggingface/datasets/pull/232 | 630,029,568 | MDExOlB1bGxSZXF1ZXN0NDI3MjI5NDcy | 232 | Nlp cli fix endpoints | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-06-03T14:10:39Z" | "2020-06-08T09:02:58Z" | "2020-06-08T09:02:57Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/232",
"merged_at": "2020-06-08T09:02:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/232"
} | With this PR users will be able to upload their own datasets and metrics.
As mentioned in #181, I had to use the new endpoints and revert the use of dataclasses (just in case we have changes in the API in the future).
We now distinguish commands for datasets and commands for metrics:
```bash
nlp-cli upload_dataset <path/to/dataset>
nlp-cli upload_metric <path/to/metric>
nlp-cli s3_datasets {rm, ls}
nlp-cli s3_metrics {rm, ls}
```
Does it sound good to you @julien-c @thomwolf ? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/232/timeline | null | null | true | [
"LGTM 👍 "
] |
https://api.github.com/repos/huggingface/datasets/issues/231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/231/comments | https://api.github.com/repos/huggingface/datasets/issues/231/events | https://github.com/huggingface/datasets/pull/231 | 629,988,694 | MDExOlB1bGxSZXF1ZXN0NDI3MTk3MTcz | 231 | Add .download to MockDownloadManager | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-03T13:20:00Z" | "2020-06-03T14:25:56Z" | "2020-06-03T14:25:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/231",
"merged_at": "2020-06-03T14:25:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/231"
} | One method from the DownloadManager was missing and some users couldn't run the tests because of that.
@yjernite | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/231/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/230/comments | https://api.github.com/repos/huggingface/datasets/issues/230/events | https://github.com/huggingface/datasets/pull/230 | 629,983,684 | MDExOlB1bGxSZXF1ZXN0NDI3MTkzMTQ0 | 230 | Don't force to install apache beam for wikipedia dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-03T13:13:07Z" | "2020-06-03T14:34:09Z" | "2020-06-03T14:34:07Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/230.diff",
"html_url": "https://github.com/huggingface/datasets/pull/230",
"merged_at": "2020-06-03T14:34:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/230.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/230"
} | As pointed out in #227, we shouldn't force users to install apache beam if the processed dataset can be downloaded. I moved the imports of some datasets to avoid this problem | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/230/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/229/comments | https://api.github.com/repos/huggingface/datasets/issues/229/events | https://github.com/huggingface/datasets/pull/229 | 629,956,490 | MDExOlB1bGxSZXF1ZXN0NDI3MTcxMzc5 | 229 | Rename dataset_infos.json to dataset_info.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4",
"events_url": "https://api.github.com/users/aswin-giridhar/events{/privacy}",
"followers_url": "https://api.github.com/users/aswin-giridhar/followers",
"following_url": "https://api.github.com/users/aswin-giridhar/following{/other_user}",
"gists_url": "https://api.github.com/users/aswin-giridhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aswin-giridhar",
"id": 11817160,
"login": "aswin-giridhar",
"node_id": "MDQ6VXNlcjExODE3MTYw",
"organizations_url": "https://api.github.com/users/aswin-giridhar/orgs",
"received_events_url": "https://api.github.com/users/aswin-giridhar/received_events",
"repos_url": "https://api.github.com/users/aswin-giridhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aswin-giridhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aswin-giridhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aswin-giridhar"
} | [] | closed | false | null | [] | null | 1 | "2020-06-03T12:31:44Z" | "2020-06-03T12:52:54Z" | "2020-06-03T12:48:33Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/229.diff",
"html_url": "https://github.com/huggingface/datasets/pull/229",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/229.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/229"
} | As the file required for viewing in the live nlp viewer is named dataset_info.json. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/229/timeline | null | null | true | [
"\r\nThis was actually the right name. `dataset_infos.json` is used to have the infos of all the dataset configurations.\r\n\r\nOn the other hand `dataset_info.json` (without 's') is a cache file with the info of one specific configuration.\r\n\r\nTo fix #228, we probably just have to clear and reload the nlp-viewer cache."
] |
https://api.github.com/repos/huggingface/datasets/issues/228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/228/comments | https://api.github.com/repos/huggingface/datasets/issues/228/events | https://github.com/huggingface/datasets/issues/228 | 629,952,402 | MDU6SXNzdWU2Mjk5NTI0MDI= | 228 | Not able to access the XNLI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4",
"events_url": "https://api.github.com/users/aswin-giridhar/events{/privacy}",
"followers_url": "https://api.github.com/users/aswin-giridhar/followers",
"following_url": "https://api.github.com/users/aswin-giridhar/following{/other_user}",
"gists_url": "https://api.github.com/users/aswin-giridhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aswin-giridhar",
"id": 11817160,
"login": "aswin-giridhar",
"node_id": "MDQ6VXNlcjExODE3MTYw",
"organizations_url": "https://api.github.com/users/aswin-giridhar/orgs",
"received_events_url": "https://api.github.com/users/aswin-giridhar/received_events",
"repos_url": "https://api.github.com/users/aswin-giridhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aswin-giridhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aswin-giridhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aswin-giridhar"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush"
}
] | null | 4 | "2020-06-03T12:25:14Z" | "2020-07-17T17:44:22Z" | "2020-07-17T17:44:22Z" | NONE | null | null | null | When I try to access the XNLI dataset, the plain_text configuration gets selected automatically and then I get the following error.
```
FileNotFoundError: [Errno 2] No such file or directory: '/home/sasha/.cache/huggingface/datasets/xnli/plain_text/1.0.0/dataset_info.json'
Traceback:
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp_viewer/run.py", line 86, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp_viewer/run.py", line 72, in get
builder_instance = builder_cls(name=conf)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/builder.py", line 610, in __init__
super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/builder.py", line 152, in __init__
self.info = DatasetInfo.from_directory(self._cache_dir)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/info.py", line 157, in from_directory
with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), "r") as f:
```
Is it possible to see if the dataset_info.json is correctly placed? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/228/timeline | null | completed | false | [
"Added pull request to change the name of the file from dataset_infos.json to dataset_info.json",
"Thanks for reporting this bug !\r\nAs it seems to be just a cache problem, I closed your PR.\r\nI think we might just need to clear and reload the `xnli` cache @srush ? ",
"Update: The dataset_info.json error is gone, but we have a new one instead:\r\n```\r\nConnectionError: Couldn't reach https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip\r\n```\r\nI am not able to reproduce on my side unfortunately. Any idea @srush ?",
"xnli is now properly shown in the viewer.\r\nClosing this one."
] |
https://api.github.com/repos/huggingface/datasets/issues/227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/227/comments | https://api.github.com/repos/huggingface/datasets/issues/227/events | https://github.com/huggingface/datasets/issues/227 | 629,845,704 | MDU6SXNzdWU2Mjk4NDU3MDQ= | 227 | Should we still have to force to install apache_beam to download wikipedia ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 3 | "2020-06-03T09:33:20Z" | "2020-06-03T15:25:41Z" | "2020-06-03T15:25:41Z" | CONTRIBUTOR | null | null | null | Hi, first thanks to @lhoestq 's revolutionary work, I successfully downloaded processed wikipedia according to the doc. 😍😍😍
But on the first try, it told me to install `apache_beam` and `mwparserfromhell`, which I thought wouldn't be needed according to #204, so it was a bit confusing at the time.
Maybe we should not force users to install these? Or should we just add them to `nlp`'s dependencies? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/227/timeline | null | completed | false | [
"Thanks for your message 😊 \r\nIndeed users shouldn't have to install those dependencies",
"Got it, feel free to close this issue when you think it’s resolved.",
"It should be good now :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/226/comments | https://api.github.com/repos/huggingface/datasets/issues/226/events | https://github.com/huggingface/datasets/pull/226 | 628,344,520 | MDExOlB1bGxSZXF1ZXN0NDI1OTA0MjEz | 226 | add BlendedSkillTalk dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 1 | "2020-06-01T10:54:45Z" | "2020-06-03T14:37:23Z" | "2020-06-03T14:37:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/226.diff",
"html_url": "https://github.com/huggingface/datasets/pull/226",
"merged_at": "2020-06-03T14:37:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/226.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/226"
} | This PR adds the BlendedSkillTalk dataset, which is used to fine-tune BlenderBot. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/226/timeline | null | null | true | [
"Awesome :D"
] |
https://api.github.com/repos/huggingface/datasets/issues/225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/225/comments | https://api.github.com/repos/huggingface/datasets/issues/225/events | https://github.com/huggingface/datasets/issues/225 | 628,083,366 | MDU6SXNzdWU2MjgwODMzNjY= | 225 | [ROUGE] Different scores with `files2rouge` | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [
{
"color": "d722e8",
"default": false,
"description": "Discussions on the metrics",
"id": 2067400959,
"name": "Metric discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwOTU5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
] | null | 3 | "2020-06-01T00:50:36Z" | "2020-06-03T15:27:18Z" | "2020-06-03T15:27:18Z" | NONE | null | null | null | It seems that the ROUGE score of `nlp` is lower than the one of `files2rouge`.
Here is a self-contained notebook to reproduce both scores : https://colab.research.google.com/drive/14EyAXValB6UzKY9x4rs_T3pyL7alpw_F?usp=sharing
---
`nlp` : (Only mid F-scores)
>rouge1 0.33508031962733364
rouge2 0.14574333776191592
rougeL 0.2321187823256159
`files2rouge` :
>Running ROUGE...
===========================
1 ROUGE-1 Average_R: 0.48873 (95%-conf.int. 0.41192 - 0.56339)
1 ROUGE-1 Average_P: 0.29010 (95%-conf.int. 0.23605 - 0.34445)
1 ROUGE-1 Average_F: 0.34761 (95%-conf.int. 0.29479 - 0.39871)
===========================
1 ROUGE-2 Average_R: 0.20280 (95%-conf.int. 0.14969 - 0.26244)
1 ROUGE-2 Average_P: 0.12772 (95%-conf.int. 0.08603 - 0.17752)
1 ROUGE-2 Average_F: 0.14798 (95%-conf.int. 0.10517 - 0.19240)
===========================
1 ROUGE-L Average_R: 0.32960 (95%-conf.int. 0.26501 - 0.39676)
1 ROUGE-L Average_P: 0.19880 (95%-conf.int. 0.15257 - 0.25136)
1 ROUGE-L Average_F: 0.23619 (95%-conf.int. 0.19073 - 0.28663)
---
When using longer predictions/gold, the difference is bigger.
**How can I reproduce the same score as `files2rouge`?**
@lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/225/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/225/timeline | null | completed | false | [
"@Colanim unfortunately there are different implementations of the ROUGE metric floating around online which yield different results, and we had to chose one for the package :) We ended up including the one from the google-research repository, which does minimal post-processing before computing the P/R/F scores. If I recall correctly, files2rouge relies on the Perl, script, which among other things normalizes all numbers to a special token: in the case you presented, this should account for a good chunk of the difference.\r\n\r\nWe may end up adding in more versions of the metric, but probably not for a while (@lhoestq correct me if I'm wrong). However, feel free to take a stab at adding it in yourself and submitting a PR if you're interested!",
"Thank you for your kind answer.\r\n\r\nAs a side question : Isn't it better to have a package that normalize more ?\r\n\r\nI understand to idea of having a package that does minimal post-processing for transparency.\r\n\r\nBut it means that people using different architecture (with different tokenizers for example) will have difference in ROUGE scores even if their predictions are actually similar. \r\nThe goal of `nlp` is to have _one package to rule them all_, right ?\r\n\r\nI will look into it but I'm not sure I have the required skill for this ^^ ",
"You're right, there's a pretty interesting trade-off here between robustness and sensitivity :) The flip side of your argument is that we also still want the metric to be sensitive to model mistakes. How we think about number normalization for example has evolved a fair bit since the Perl script was written: at the time, ROUGE was used mostly to evaluate short-medium text summarization systems, where there were only a few numbers in the input and it was assumed that the most popular methods in use at the time would get those right. However, as your example showcases, that assumption does not hold any more, and we do want to be able to penalize a model that generates a wrong numerical value.\r\n\r\nAlso, we think that abstracting away tokenization differences is the role of the model/tokenizer: if you use the 🤗Tokenizers library for example, it will handle that for you ;)\r\n\r\nFinally, there is a lot of active research on developing model-powered metrics that are both more sensitive and more robust than ROUGE. Check out for example BERTscore, which is implemented in this library!"
] |
https://api.github.com/repos/huggingface/datasets/issues/224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/224/comments | https://api.github.com/repos/huggingface/datasets/issues/224/events | https://github.com/huggingface/datasets/issues/224 | 627,791,693 | MDU6SXNzdWU2Mjc3OTE2OTM= | 224 | [Feature Request/Help] BLEURT model -> PyTorch | {
"avatar_url": "https://avatars.githubusercontent.com/u/6889910?v=4",
"events_url": "https://api.github.com/users/adamwlev/events{/privacy}",
"followers_url": "https://api.github.com/users/adamwlev/followers",
"following_url": "https://api.github.com/users/adamwlev/following{/other_user}",
"gists_url": "https://api.github.com/users/adamwlev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adamwlev",
"id": 6889910,
"login": "adamwlev",
"node_id": "MDQ6VXNlcjY4ODk5MTA=",
"organizations_url": "https://api.github.com/users/adamwlev/orgs",
"received_events_url": "https://api.github.com/users/adamwlev/received_events",
"repos_url": "https://api.github.com/users/adamwlev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adamwlev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamwlev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adamwlev"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
] | null | 6 | "2020-05-30T18:30:40Z" | "2023-08-26T17:38:48Z" | "2021-01-04T09:53:32Z" | NONE | null | null | null | Hi, I am interested in porting google research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Twitter).
I had a go of just like manually using the checkpoint that they publish which includes the weights. It seems like the architecture is exactly aligned with the out-of-the-box BertModel in transformers just with a single linear layer on top of the CLS embedding. I loaded all the weights to the PyTorch model but I am not able to get the same numbers as the BLEURT package's python api. Here is my colab notebook where I tried https://colab.research.google.com/drive/1Bfced531EvQP_CpFvxwxNl25Pj6ptylY?usp=sharing . If you have any pointers on what might be going wrong that would be much appreciated!
Thank you muchly! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/224/timeline | null | completed | false | [
"Is there any update on this? \r\n\r\nThanks!",
"Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?",
"We currently provide a wrapper on the TensorFlow implementation: https://huggingface.co/metrics/bleurt\r\n\r\nWe have long term plans to better handle model-based metrics, but they probably won't be implemented right away\r\n\r\n@adamwlev it would still be cool to add the BLEURT checkpoints to the transformers repo if you're interested, but that would best be discussed there :) \r\n\r\nclosing for now",
"Hi there. We ran into the same problem this year (converting BLEURT to PyTorch) and thanks to @adamwlev found his colab notebook which didn't work but served as a good starting point. Finally, we **made it work** by doing just two simple conceptual fixes: \r\n\r\n1. Transposing 'kernel' layers instead of 'dense' ones when copying params from the original model;\r\n2. Taking pooler_output as a cls_state in forward function of the BleurtModel class.\r\n\r\nPlus few minor syntactical fixes for the outdated parts. The result is still not exactly the same, but is very close to the expected one (1.0483 vs 1.0474).\r\n\r\nFind the fixed version here (fixes are commented): https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing \r\n",
"I created a new model based on `transformers` that can load every BLEURT checkpoints released so far. https://github.com/lucadiliello/bleurt-pytorch",
"@LoraIpsum Thanks for sharing your work here. However, I'm unable to reproduce the results. That's strange because you are. FYI, I am trying to convert a finetuned BLEURT to PyTorch. Any suggestions on how I can reproduce results?"
] |
https://api.github.com/repos/huggingface/datasets/issues/223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/223/comments | https://api.github.com/repos/huggingface/datasets/issues/223/events | https://github.com/huggingface/datasets/issues/223 | 627,683,386 | MDU6SXNzdWU2Mjc2ODMzODY= | 223 | [Feature request] Add FLUE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4",
"events_url": "https://api.github.com/users/lbourdois/events{/privacy}",
"followers_url": "https://api.github.com/users/lbourdois/followers",
"following_url": "https://api.github.com/users/lbourdois/following{/other_user}",
"gists_url": "https://api.github.com/users/lbourdois/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lbourdois",
"id": 58078086,
"login": "lbourdois",
"node_id": "MDQ6VXNlcjU4MDc4MDg2",
"organizations_url": "https://api.github.com/users/lbourdois/orgs",
"received_events_url": "https://api.github.com/users/lbourdois/received_events",
"repos_url": "https://api.github.com/users/lbourdois/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lbourdois/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbourdois/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lbourdois"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 3 | "2020-05-30T08:52:15Z" | "2020-12-03T13:39:33Z" | "2020-12-03T13:39:33Z" | NONE | null | null | null | Hi,
I think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French.
In other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned.
If it is not the case, I can provide each of the cleaned FLUE datasets (in the form of a directly exploitable dataset rather than in the original xml formats which require additional processing, with the French part for cases where the dataset is based on a multilingual dataframe, etc.). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/223/timeline | null | completed | false | [
"Hi @lbourdois, yes please share it with us",
"@mariamabarham \r\nI put all the datasets on this drive: https://1drv.ms/u/s!Ao2Rcpiny7RFinDypq7w-LbXcsx9?e=iVsEDh\r\n\r\n\r\nSome information : \r\n• For FLUE, the quote used is\r\n\r\n> @misc{le2019flaubert,\r\n> title={FlauBERT: Unsupervised Language Model Pre-training for French},\r\n> author={Hang Le and Loïc Vial and Jibril Frej and Vincent Segonne and Maximin Coavoux and Benjamin Lecouteux and Alexandre Allauzen and Benoît Crabbé and Laurent Besacier and Didier Schwab},\r\n> year={2019},\r\n> eprint={1912.05372},\r\n> archivePrefix={arXiv},\r\n> primaryClass={cs.CL}\r\n> }\r\n\r\n• The Github repo of FLUE is avaible here : https://github.com/getalp/Flaubert/tree/master/flue\r\n\r\n\r\n\r\nInformation related to the different tasks of FLUE : \r\n\r\n**1. Classification**\r\nThree dataframes are available: \r\n- Book\r\n- DVD\r\n- Music\r\nFor each of these dataframes is available a set of training and test data, and a third one containing unlabelled data.\r\n\r\nCitation : \r\n>@dataset{prettenhofer_peter_2010_3251672,\r\n author = {Prettenhofer, Peter and\r\n Stein, Benno},\r\n title = {{Webis Cross-Lingual Sentiment Dataset 2010 (Webis- \r\n CLS-10)}},\r\n month = jul,\r\n year = 2010,\r\n publisher = {Zenodo},\r\n doi = {10.5281/zenodo.3251672},\r\n url = {https://doi.org/10.5281/zenodo.3251672}\r\n}\r\n\r\n\r\n**2. Paraphrasing** \r\nFrench part of the PAWS-X dataset (https://github.com/google-research-datasets/paws).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nCitation : \r\n> @InProceedings{pawsx2019emnlp,\r\n> title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},\r\n> author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},\r\n> booktitle = {Proc. of EMNLP},\r\n> year = {2019}\r\n> }\r\n\r\n\r\n\r\n**3. Natural Language Inference**\r\nFrench part of the XNLI dataset (https://github.com/facebookresearch/XNLI).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nFor the dev and test datasets, extra columns compared to the train dataset were available so I left them in the dataframe (I didn't know if these columns could be useful for other tasks or not). \r\nIn the context of the FLUE benchmark, only the columns gold_label, sentence1 and sentence2 are useful.\r\n\r\n\r\nCitation : \r\n\r\n> @InProceedings{conneau2018xnli,\r\n> author = \"Conneau, Alexis\r\n> and Rinott, Ruty\r\n> and Lample, Guillaume\r\n> and Williams, Adina\r\n> and Bowman, Samuel R.\r\n> and Schwenk, Holger\r\n> and Stoyanov, Veselin\",\r\n> title = \"XNLI: Evaluating Cross-lingual Sentence Representations\",\r\n> booktitle = \"Proceedings of the 2018 Conference on Empirical Methods\r\n> in Natural Language Processing\",\r\n> year = \"2018\",\r\n> publisher = \"Association for Computational Linguistics\",\r\n> location = \"Brussels, Belgium\",\r\n\r\n\r\n**4. Parsing**\r\nThe dataset used by the FLUE authors for this task is not freely available.\r\nUsers of your library will therefore not be able to access it.\r\nNevertheless, I think maybe it is useful to add a link to the site where to request this dataframe: http://ftb.linguist.univ-paris-diderot.fr/telecharger.php?langue=en \r\n(personally it was sent to me less than 48 hours after I requested it).\r\n\r\n\r\n**5. 
Word Sense Disambiguation Tasks**\r\n5.1 Verb Sense Disambiguation\r\n\r\nTwo dataframes are available: train and test\r\nFor both dataframes, 4 columns are available: document, sentence, lemma and word.\r\nI created the document column thinking that there were several documents in the dataset but afterwards it turns out that there were not: several sentences but only one document. It's up to you to keep it or not when importing these two dataframes.\r\n\r\nThe sentence column is used to determine to which sentence the word in the word column belongs. It is in the form of a dictionary {'id': 'd000.s001', 'idx': '1'}. I thought for a while to keep only the idx because the id doesn't matter any more information. Nevertheless for the test dataset, the dictionary has an extra value indicating the source of the sentence. I don't know if it's useful or not, that's why I left the dictionary just in case. The user is free to do what he wants with it.\r\n\r\nCitation : \r\n\r\n> Segonne, V., Candito, M., and Crabb ́e, B. (2019). Usingwiktionary as a resource for wsd: the case of frenchverbs. InProceedings of the 13th International Confer-ence on Computational Semantics-Long Papers, pages259–270\r\n\r\n5.2 Noun Sense Disambiguation\r\nTwo dataframes are available: 2 train and 1 test\r\n\r\nI confess I didn't fully understand the procedure for this task.\r\n\r\nCitation : \r\n\r\n> @dataset{loic_vial_2019_3549806,\r\n> author = {Loïc Vial},\r\n> title = {{French Word Sense Disambiguation with Princeton \r\n> WordNet Identifiers}},\r\n> month = nov,\r\n> year = 2019,\r\n> publisher = {Zenodo},\r\n> version = {1.0},\r\n> doi = {10.5281/zenodo.3549806},\r\n> url = {https://doi.org/10.5281/zenodo.3549806}\r\n> }\r\n\r\nFinally, additional information about FLUE is available in the FlauBERT publication : \r\nhttps://arxiv.org/abs/1912.05372 (p. 4).\r\n\r\n\r\nHoping to have provided you with everything you need to add this benchmark :) \r\n",
"https://github.com/huggingface/datasets/pull/943"
] |
https://api.github.com/repos/huggingface/datasets/issues/222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/222/comments | https://api.github.com/repos/huggingface/datasets/issues/222/events | https://github.com/huggingface/datasets/issues/222 | 627,586,690 | MDU6SXNzdWU2Mjc1ODY2OTA= | 222 | Colab Notebook breaks when downloading the squad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/338917?v=4",
"events_url": "https://api.github.com/users/carlos-aguayo/events{/privacy}",
"followers_url": "https://api.github.com/users/carlos-aguayo/followers",
"following_url": "https://api.github.com/users/carlos-aguayo/following{/other_user}",
"gists_url": "https://api.github.com/users/carlos-aguayo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/carlos-aguayo",
"id": 338917,
"login": "carlos-aguayo",
"node_id": "MDQ6VXNlcjMzODkxNw==",
"organizations_url": "https://api.github.com/users/carlos-aguayo/orgs",
"received_events_url": "https://api.github.com/users/carlos-aguayo/received_events",
"repos_url": "https://api.github.com/users/carlos-aguayo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/carlos-aguayo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carlos-aguayo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/carlos-aguayo"
} | [] | closed | false | null | [] | null | 6 | "2020-05-29T22:55:59Z" | "2020-06-04T00:21:05Z" | "2020-06-04T00:21:05Z" | NONE | null | null | null | When I run the notebook in Colab
https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb
it breaks when running this cell:
![image](https://user-images.githubusercontent.com/338917/83311709-ffd1b800-a1dd-11ea-8394-3a87df0d7f8b.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/222/timeline | null | completed | false | [
"The notebook forces version 0.1.0. If I use the latest, things work, I'll run the whole notebook and create a PR.\r\n\r\nBut in the meantime, this issue gets fixed by changing:\r\n`!pip install nlp==0.1.0`\r\nto\r\n`!pip install nlp`",
"It still breaks very near the end\r\n\r\n![image](https://user-images.githubusercontent.com/338917/83312264-aa96a600-a1df-11ea-987f-2f4a0474247e.png)\r\n",
"When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your first message ",
"Thanks for reporting the second one ! We'll update the notebook to fix this one :)",
"This trick from @thomwolf seems to be the most reliable solution to fix this colab notebook issue:\r\n\r\n```python\r\n# install nlp\r\n!pip install -qq nlp==0.2.0\r\n\r\n# Make sure that we have a recent version of pyarrow in the session before we continue - otherwise reboot Colab to activate it\r\nimport pyarrow\r\nif int(pyarrow.__version__.split('.')[1]) < 16:\r\n import os\r\n os.kill(os.getpid(), 9)\r\n```",
"The second part got fixed here: 2cbc656d6fc4b18ce57eac070baec05b31180d39\r\n\r\nThanks! I'm then closing this issue."
] |
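The fix in the comments above compares only pyarrow's minor version, which stops working once pyarrow reaches 1.x. A hedged, slightly more defensive sketch of the same runtime-restart trick:

```python
# Sketch of a more robust version check for the Colab workaround above.
import os

import pyarrow

def _version_tuple(version):
    # Keep only the leading numeric components, e.g. "0.17.1" -> (0, 17, 1).
    parts = []
    for piece in version.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

if _version_tuple(pyarrow.__version__) < (0, 16):
    # Kill the Colab process so the runtime restarts with the freshly installed pyarrow.
    os.kill(os.getpid(), 9)
```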
https://api.github.com/repos/huggingface/datasets/issues/221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/221/comments | https://api.github.com/repos/huggingface/datasets/issues/221/events | https://github.com/huggingface/datasets/pull/221 | 627,300,648 | MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0 | 221 | Fix tests/test_dataset_common.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4",
"events_url": "https://api.github.com/users/tayciryahmed/events{/privacy}",
"followers_url": "https://api.github.com/users/tayciryahmed/followers",
"following_url": "https://api.github.com/users/tayciryahmed/following{/other_user}",
"gists_url": "https://api.github.com/users/tayciryahmed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tayciryahmed",
"id": 13635495,
"login": "tayciryahmed",
"node_id": "MDQ6VXNlcjEzNjM1NDk1",
"organizations_url": "https://api.github.com/users/tayciryahmed/orgs",
"received_events_url": "https://api.github.com/users/tayciryahmed/received_events",
"repos_url": "https://api.github.com/users/tayciryahmed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tayciryahmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tayciryahmed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tayciryahmed"
} | [] | closed | false | null | [] | null | 1 | "2020-05-29T14:12:15Z" | "2020-06-01T12:20:42Z" | "2020-05-29T15:02:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/221",
"merged_at": "2020-05-29T15:02:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/221"
} | When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220, I get the error `unexpected keyword argument "'download_and_prepare_kwargs'"` from `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/master/src/nlp/load.py#L441) no longer has the argument `download_and_prepare_kwargs` but rather `download_config`, so here I change the tests accordingly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/221/timeline | null | null | true | [
"Thanks ! Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md) ?"
] |
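A hedged sketch of the call-site change this PR adapts the tests to: `download_and_prepare_kwargs` is gone and a `download_config` object is passed instead. The import path of `DownloadConfig` is an assumption for this version of the library.

```python
import nlp
from nlp import DownloadConfig  # assumption: may live under nlp.utils in this release

# Before (no longer accepted by load_dataset):
# dataset = nlp.load_dataset("arcd", download_and_prepare_kwargs={...})

# After: pass a DownloadConfig directly.
dataset = nlp.load_dataset("arcd", download_config=DownloadConfig())
```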
https://api.github.com/repos/huggingface/datasets/issues/220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/220/comments | https://api.github.com/repos/huggingface/datasets/issues/220/events | https://github.com/huggingface/datasets/pull/220 | 627,280,683 | MDExOlB1bGxSZXF1ZXN0NDI1MTEzMzEy | 220 | dataset_arcd | {
"avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4",
"events_url": "https://api.github.com/users/tayciryahmed/events{/privacy}",
"followers_url": "https://api.github.com/users/tayciryahmed/followers",
"following_url": "https://api.github.com/users/tayciryahmed/following{/other_user}",
"gists_url": "https://api.github.com/users/tayciryahmed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tayciryahmed",
"id": 13635495,
"login": "tayciryahmed",
"node_id": "MDQ6VXNlcjEzNjM1NDk1",
"organizations_url": "https://api.github.com/users/tayciryahmed/orgs",
"received_events_url": "https://api.github.com/users/tayciryahmed/received_events",
"repos_url": "https://api.github.com/users/tayciryahmed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tayciryahmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tayciryahmed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tayciryahmed"
} | [] | closed | false | null | [] | null | 2 | "2020-05-29T13:46:50Z" | "2020-05-29T14:58:40Z" | "2020-05-29T14:57:21Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/220.diff",
"html_url": "https://github.com/huggingface/datasets/pull/220",
"merged_at": "2020-05-29T14:57:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/220.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/220"
} | Added Arabic Reading Comprehension Dataset (ARCD): https://arxiv.org/abs/1906.05394 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/220/timeline | null | null | true | [
"you can rebase from master to fix the CI error :)",
"Awesome !"
] |
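Once this PR is merged, the dataset should be loadable by name. A minimal usage sketch, assuming the script is registered as `arcd` and exposes SQuAD-style fields (`context`, `question`, `answers`):

```python
import nlp

# Assumption: the script added in this PR is registered under the name "arcd".
arcd = nlp.load_dataset("arcd")
print(arcd)
print(arcd["train"][0]["question"])  # SQuAD-style schema is an assumption here
```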
https://api.github.com/repos/huggingface/datasets/issues/219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/219/comments | https://api.github.com/repos/huggingface/datasets/issues/219/events | https://github.com/huggingface/datasets/pull/219 | 627,235,893 | MDExOlB1bGxSZXF1ZXN0NDI1MDc2NjQx | 219 | force mwparserfromhell as third party | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-29T12:33:17Z" | "2020-05-29T13:30:13Z" | "2020-05-29T13:30:12Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/219",
"merged_at": "2020-05-29T13:30:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/219"
} | This should fix your env because you had `mwparserfromhell` listed as a first-party package for `isort` @patrickvonplaten | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/219/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/218/comments | https://api.github.com/repos/huggingface/datasets/issues/218/events | https://github.com/huggingface/datasets/pull/218 | 627,173,407 | MDExOlB1bGxSZXF1ZXN0NDI1MDI2NzEz | 218 | Add Natural Questions and C4 scripts | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-29T10:40:30Z" | "2020-05-29T12:31:01Z" | "2020-05-29T12:31:00Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/218.diff",
"html_url": "https://github.com/huggingface/datasets/pull/218",
"merged_at": "2020-05-29T12:31:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/218.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/218"
} | Scripts are ready!
However, they are not yet processed nor directly available from GCP. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/218/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/218/timeline | null | null | true | [] |
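Since both scripts are Beam-based and the processed files are not hosted yet, loading them locally would require an explicit runner, as with other Beam datasets in this repo. A hedged sketch (dataset name and local feasibility are assumptions):

```python
import nlp

# Sketch: Beam-based datasets need a runner until preprocessed files are available.
# DirectRunner runs locally and may need a lot of memory and disk space.
nq = nlp.load_dataset("natural_questions", beam_runner="DirectRunner")
```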
https://api.github.com/repos/huggingface/datasets/issues/217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/217/comments | https://api.github.com/repos/huggingface/datasets/issues/217/events | https://github.com/huggingface/datasets/issues/217 | 627,128,403 | MDU6SXNzdWU2MjcxMjg0MDM= | 217 | Multi-task dataset mixing | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | null | 26 | "2020-05-29T09:22:26Z" | "2022-10-22T00:45:50Z" | null | CONTRIBUTOR | null | null | null | It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).
The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:
- **Examples-proportional mixing** - sample from tasks proportionally to their dataset size
- **Equal mixing** - sample uniformly from each task
- **Temperature-scaled mixing** - The generalized approach used by multilingual BERT, which uses a temperature T: the mixing rate of each task is raised to the power 1/T and renormalized. When T=1 this is equivalent to examples-proportional mixing, and it becomes closer to equal mixing as T increases.
Following this discussion https://github.com/huggingface/transformers/issues/4340 in [transformers](https://github.com/huggingface/transformers), @enzoampil suggested that the `nlp` library might be a better place for this functionality.
Some method for combining datasets could be implemented, e.g.
```
dataset = nlp.load_multitask(['squad','imdb','cnn_dm'], temperature=2.0, ...)
```
We would need a few additions:
- Method of identifying the tasks - how can we support adding a string to each task as an identifier: e.g. 'summarisation: '?
- Method of combining the metrics - a standard approach is to use the specific metric for each task and add them together for a combined score.
It would be great to support common use cases such as pretraining on the GLUE benchmark before fine-tuning on each GLUE task in turn.
I'm willing to write bits/most of this; I just need some guidance on the interface and other library details so I can integrate it properly.
| {
"+1": 12,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 12,
"url": "https://api.github.com/repos/huggingface/datasets/issues/217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/217/timeline | null | null | false | [
"I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **Hypothesis**: The St. Louis Cardinals have always won.\r\n> \r\n> - **Premise**: yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals when they were there were uh a mostly a losing team but \r\n\r\nwas flattened to a single input:\r\n\r\n> mnli hypothesis: The St. Louis Cardinals have always won. premise:\r\n> yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals\r\n> when they were there were uh a mostly a losing team but.\r\n\r\nThis flattening is actually a very simple operation in `nlp` already. You would just need to do the following:\r\n\r\n```python \r\ndef flatten_inputs(example):\r\n return {\"input\": \"mnli hypothesis: \" + example['hypothesis'] + \" premise: \" + example['premise']}\r\n\r\nt5_ready_mnli_ds = mnli_ds.map(flatten_inputs, remove_columns=[<all columns except output>])\r\n```\r\n\r\nSo I guess converting the datasets into the same format can be left to the user for now. \r\nThen the question is how we can merge the datasets. I would probably be in favor of a simple \r\n\r\n```python \r\ndataset.add()\r\n```\r\n\r\nfunction that checks if the dataset is of the same format and if yes merges the two datasets. Finally, how should the sampling be implemented? **Examples-proportional mixing** corresponds to just merging the datasets and shuffling. For the other two sampling approaches we would need some higher-level features, maybe even a `dataset.sample()` function for merged datasets. \r\n\r\nWhat are your thoughts on this @thomwolf @lhoestq @ghomasHudson @enzoampil ?",
"I agree that we should leave the flattening of the dataset to the user for now. Especially because although the T5 framing seems obvious, there are slight variations on how the T5 authors do it in comparison to other approaches such as gpt-3 and decaNLP.\r\n\r\nIn terms of sampling, Examples-proportional mixing does seem the simplest to implement so would probably be a good starting point.\r\n\r\nTemperature-scaled mixing would probably most useful, offering flexibility as it can simulate the other 2 methods by setting the temperature parameter. There is a [relevant part of the T5 repo](https://github.com/google-research/text-to-text-transfer-transformer/blob/03c94165a7d52e4f7230e5944a0541d8c5710788/t5/data/utils.py#L889-L1118) which should help with implementation.\r\n\r\nAccording to the T5 authors, equal-mixing performs worst. Among the other two methods, tuning the K value (the artificial dataset size limit) has a large impact.\r\n",
"I agree with going with temperature-scaled mixing for its flexibility!\r\n\r\nFor the function that combines the datasets, I also find `dataset.add()` okay while also considering that users may want it to be easy to combine a list of say 10 data sources in one go.\r\n\r\n`dataset.sample()` should also be good. By the looks of it, we're planning to have as main parameters: `temperature`, and `K`.\r\n\r\nOn converting the datasets to the same format, I agree that we can leave these to the users for now. But, I do imagine it'd be an awesome feature for the future to have this automatically handled, based on a chosen *approach* to formatting :smile: \r\n\r\nE.g. T5, GPT-3, decaNLP, original raw formatting, or a contributed way of formatting in text-to-text. ",
"This is an interesting discussion indeed and it would be nice to make multi-task easier.\r\n\r\nProbably the best would be to have a new type of dataset especially designed for that in order to easily combine and sample from the multiple datasets.\r\n\r\nThis way we could probably handle the combination of datasets with differing schemas as well (unlike T5).",
"@thomwolf Are you suggesting making a wrapper class which can take existing datasets as arguments and do all the required sampling/combining, to present the same interface as a normal dataset?\r\n\r\nThat doesn't seem too complicated to implement.\r\n",
"I guess we're looking at the end user writing something like:\r\n``` python\r\nds = nlp.load_dataset('multitask-t5',datasets=[\"squad\",\"cnn_dm\",...], k=1000, t=2.0)\r\n```\r\nUsing the t5 method of combining here (or this could be a function passed in as an arg) \r\n\r\nPassing kwargs to each 'sub-dataset' might become tricky.",
"From thinking upon @thomwolf 's suggestion, I've started experimenting:\r\n```python\r\nclass MultitaskDataset(DatasetBuilder):\r\n def __init__(self, *args, **kwargs):\r\n super(MultitaskDataset, self).__init__(*args, **kwargs)\r\n self._datasets = kwargs.get(\"datasets\")\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=nlp.Features({\r\n \"source\": nlp.Value(\"string\"),\r\n \"target\": nlp.Sequence(nlp.Value(\"string\"))\r\n })\r\n )\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self._datasets'''\r\n min_set = None\r\n for dataset in self._datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n....\r\n\r\n# Maybe this?:\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\nmultitask_dataset = nlp.load_dataset(\r\n 'multitask_dataset',\r\n datasets=[squad,cnn_dailymail], \r\n k=1000, \r\n t=2.0\r\n)\r\n\r\n```\r\n\r\nDoes anyone know what methods of `MultitaskDataset` I would need to implement? Maybe `as_dataset` and `download_and_prepare`? Most of these should be just calling the methods of the sub-datasets. \r\n\r\nI'm assuming DatasetBuilder is better than the more specific `GeneratorBasedBuilder`, `BeamBasedBuilder`, etc....\r\n\r\nOne of the other problems is that the dataset size is unknown till you construct it (as you can pick the sub-datasets). Am hoping not to need to make changes to `nlp.load_dataset` just for this class.\r\n\r\nI'd appreciate it if anyone more familiar with nlp's internal workings could tell me if I'm on the right track!",
"I think I would probably go for a `MultiDataset` wrapper around a list of `Dataset`.\r\n\r\nI'm not sure we need to give it `k` and `t` parameters at creation, it can maybe be something along the lines of:\r\n```python\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\n\r\nmultitask_dataset = nlp.MultiDataset(squad, cnn_dm)\r\n\r\nbatch = multitask_dataset.sample(10, temperature=2.0, k=1000)\r\n```\r\n\r\nThe first proof-of-concept for multi-task datasets could definitely require that the provided datasets have the same name/type for columns (if needed you easily rename/cast a column prior to instantiating the `MultiDataset`).\r\n\r\nIt's good to think about it for some time though and don't overfit too much on the T5 examples (in particular for the ways/kwargs for sampling among datasets).",
"The problem with changing `k` and `t` per sampling is that you'd have to somehow remember which examples you'd already returned while re-weighting the remaining examples based on the new `k` and `t`values. It seems possible but complicated (I can't really see a reason why you'd want to change the weighting of datasets after you constructed the multidataset).\r\n\r\nWouldn't it be convenient if it implemented the dataset interface? Then if someone has code using a single nlp dataset, they can replace it with a multitask combination of more datasets without having to change other code. We would at least need to be able to pass it into a `DataLoader`.\r\n\r\n",
"A very janky (but working) implementation of `multitask_dataset.sample()` could be something like this:\r\n```python\r\nimport nlp\r\nimport torch\r\n\r\nclass MultiDataset():\r\n def __init__(self, *args, temperature=2.0, k=1000, maximum=None, scale=1):\r\n self.datasets = args\r\n self._dataloaders = {}\r\n for split in self._get_common_splits():\r\n split_datasets = [ds[split] for ds in self.datasets]\r\n mixing_rates = self._calc_mixing_rates(split_datasets,temperature, k, maximum, scale)\r\n weights = []\r\n for i in range(len(self.datasets)):\r\n weights += [mixing_rates[i]]*len(self.datasets[i][split])\r\n self._dataloaders[split] = torch.utils.data.DataLoader(torch.utils.data.ConcatDataset(split_datasets),\r\n sampler=torch.utils.data.sampler.WeightedRandomSampler(\r\n num_samples=len(weights),\r\n weights = weights,\r\n replacement=True),\r\n shuffle=False)\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in self.datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n\r\n def _calc_mixing_rates(self,datasets, temperature=2.0, k=1000, maximum=None, scale=1):\r\n '''Work out the weighting of each dataset based on t and k'''\r\n mixing_rates = []\r\n for dataset in datasets:\r\n rate = len(dataset)\r\n rate *= scale\r\n if maximum:\r\n rate = min(rate, maximum)\r\n if temperature != 1.0:\r\n rate = rate ** (1.0/temperature)\r\n mixing_rates.append(rate)\r\n return mixing_rates\r\n\r\n def sample(self,n,split):\r\n batch = []\r\n for example in self._dataloaders[split]:\r\n batch.append(example)\r\n n -= 1\r\n if n == 0:\r\n return batch\r\n\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\nmultitask_dataset = MultiDataset(squad, cnn_dm)\r\nbatch = multitask_dataset.sample(100,\"train\")\r\n```\r\n\r\nThere's definitely a more sensible way than embedding `DataLoader`s inside. ",
"There is an interesting related investigation by @zphang here https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb",
"Good spot! Here are my thoughts:\r\n\r\n- Aside: Adding `MultitaskModel` to transformers might be a thing to raise - even though having task-specific heads has become unfashionable in recent times in favour of text-to-text type models.\r\n- Adding the task name as an extra field also seems useful for these kind of models which have task-specific heads\r\n- There is some validation of our approach that the user should be expected to `map` datasets into a common form.\r\n- The size-proportional sampling (also called \"Examples-proportional mixing\") used here doesn't perform too badly in the T5 paper (it's comparable to temperature-scaled mixing in many cases but less flexible. This is only reasonable with a `K` maximum size parameter to prevent very large datasets dominating). This might be good for a first prototype using:\r\n ```python\r\n def __iter__(self):\r\n \"\"\"\r\n For each batch, sample a task, and yield a batch from the respective\r\n task Dataloader.\r\n\r\n We use size-proportional sampling, but you could easily modify this\r\n to sample from some-other distribution.\r\n \"\"\"\r\n task_choice_list = []\r\n for i, task_name in enumerate(self.task_name_list):\r\n task_choice_list += [i] * self.num_batches_dict[task_name]\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n dataloader_iter_dict = {\r\n task_name: iter(dataloader) \r\n for task_name, dataloader in self.dataloader_dict.items()\r\n }\r\n for task_choice in task_choice_list:\r\n task_name = self.task_name_list[task_choice]\r\n yield next(dataloader_iter_dict[task_name]) \r\n ```\r\n We'd just need to pull samples from the raw datasets and not from `DataLoader`s for each task. We can assume the user has done `dataset.shuffle()` if they want to.\r\n\r\n Other sampling methods can later be implemented by changing how the `task_choice_list` is generated. This should allow more flexibility and not tie us to specific methods for sampling among datasets.\r\n",
"Another thought: Multitasking over benchmarks (represented as Meta-datasets in nlp) is probably a common use case. Would be nice to pass an entire benchmark to our `MultiDataset` wrapper rather than having to pass individual components.",
"Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n\r\n- I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n- I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n- I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n- I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n- This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\nclass MultiDataset:\r\n def __init__(self,tasks):\r\n self.tasks = tasks\r\n\r\n # Create random order of tasks\r\n # Using size-proportional sampling\r\n task_choice_list = []\r\n for i, task in enumerate(self.tasks):\r\n task_choice_list += [i] * len(task)\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n # Add index into each dataset\r\n # - We don't want to shuffle within each task\r\n counters = {}\r\n self.task_choice_list = []\r\n for i in range(len(task_choice_list)):\r\n idx = counters.get(task_choice_list[i],0)\r\n self.task_choice_list.append((task_choice_list[i],idx))\r\n counters[task_choice_list[i]] = idx + 1\r\n\r\n\r\n def __len__(self):\r\n return np.sum([len(t) for t in self.tasks])\r\n\r\n def __repr__(self):\r\n task_str = \", \".join([str(t) for t in self.tasks])\r\n return f\"MultiDataset(tasks: {task_str})\"\r\n\r\n def __getitem__(self,key):\r\n if isinstance(key, int):\r\n task_idx, example_idx = self.task_choice_list[key]\r\n task = self.tasks[task_idx]\r\n example = task[example_idx]\r\n example[\"task_name\"] = task.info.builder_name\r\n return example\r\n elif isinstance(key, slice):\r\n raise NotImplementedError()\r\n\r\n def __iter__(self):\r\n for i in range(len(self)):\r\n yield self[i]\r\n\r\n\r\ndef load_multitask(*datasets):\r\n '''Create multitask datasets per split'''\r\n\r\n def _get_common_splits(datasets):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n common_splits = _get_common_splits(datasets)\r\n out = {}\r\n for split in common_splits:\r\n out[split] = MultiDataset([d[split] for d in datasets])\r\n return out\r\n\r\n\r\n##########################################\r\n# Dataset Flattening\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n \"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef 
flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\n#############################################\r\n\r\nmtds = load_multitask(squad,cnn_dm)\r\n\r\nfor example in mtds[\"train\"]:\r\n print(example[\"task_name\"],example[\"target\"])\r\n```\r\nLet me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.",
"Hey! Happy to jump into the discussion here. I'm still getting familiar with bits of this code, but the reasons I sampled over data loaders rather than datasets is 1) ensuring that each sampled batch corresponds to only 1 task (in case of different inputs formats/downstream models) and 2) potentially having different batch sizes per task (e.g. some tasks have very long/short inputs). How are you currently dealing with these in your PR?",
"The short answer is - I'm not! Everything is currently on a per-example basis. It would be fairly simple to add a `batch_size` argument which would ensure that every `batch_size` examples come from the same task. That should suit most use-cases (unless you wanted to ensure batches all came from the same task and apply something like `SortishSampler` on each task first)\r\n\r\nYour notebook was really inspiring by the way - thanks!",
"@zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.",
"mt-dnn's [batcher.py](https://github.com/namisan/mt-dnn/blob/master/mt_dnn/batcher.py) might be worth looking at.",
"> @zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.\r\n\r\nI think having different batch sizes per task is particularly helpful in some scenarios where each task has different amount of data. For example, the problem I'm currently facing is one task has tens of thousands of samples while one task has a couple hundreds. I think in this case different batch size could help. But if using the same batch size is a lot simpler to implement, I guess it makes sense to go with that.",
"I think that instead of proportional to size sampling you should specify weights or probabilities for drawing a batch from each dataset. We should also ensure that the smaller datasets are repeated so that the encoder layer doesn't overtrain on the largest dataset.",
"Are there any references for people doing different batch sizes per task in the literature? I've only seen constant batch sizes with differing numbers of batches for each task which seems sufficient to prevent the impact of large datasets (Read 3.5.3 of the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) for example).\r\n\r\n",
"Hi,\r\nregarding building T5 dataset , I think we can use datasets https://github.com/huggingface/datasets and then need something similar to tf.data.experimental.sample_from_datasets, do you know if similar functionality exist in pytorch? Which can sample multiple datasets with the given rates. thanks. ",
"Is this feature part of a `datasets` release yet? ",
"> Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n> \r\n> * I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n> * I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n> * I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n> * I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n> * This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n> \r\n> ```python\r\n> import nlp\r\n> import numpy as np\r\n> \r\n> class MultiDataset:\r\n> def __init__(self,tasks):\r\n> self.tasks = tasks\r\n> \r\n> # Create random order of tasks\r\n> # Using size-proportional sampling\r\n> task_choice_list = []\r\n> for i, task in enumerate(self.tasks):\r\n> task_choice_list += [i] * len(task)\r\n> task_choice_list = np.array(task_choice_list)\r\n> np.random.shuffle(task_choice_list)\r\n> \r\n> # Add index into each dataset\r\n> # - We don't want to shuffle within each task\r\n> counters = {}\r\n> self.task_choice_list = []\r\n> for i in range(len(task_choice_list)):\r\n> idx = counters.get(task_choice_list[i],0)\r\n> self.task_choice_list.append((task_choice_list[i],idx))\r\n> counters[task_choice_list[i]] = idx + 1\r\n> \r\n> \r\n> def __len__(self):\r\n> return np.sum([len(t) for t in self.tasks])\r\n> \r\n> def __repr__(self):\r\n> task_str = \", \".join([str(t) for t in self.tasks])\r\n> return f\"MultiDataset(tasks: {task_str})\"\r\n> \r\n> def __getitem__(self,key):\r\n> if isinstance(key, int):\r\n> task_idx, example_idx = self.task_choice_list[key]\r\n> task = self.tasks[task_idx]\r\n> example = task[example_idx]\r\n> example[\"task_name\"] = task.info.builder_name\r\n> return example\r\n> elif isinstance(key, slice):\r\n> raise NotImplementedError()\r\n> \r\n> def __iter__(self):\r\n> for i in range(len(self)):\r\n> yield self[i]\r\n> \r\n> \r\n> def load_multitask(*datasets):\r\n> '''Create multitask datasets per split'''\r\n> \r\n> def _get_common_splits(datasets):\r\n> '''Finds the common splits present in all self.datasets'''\r\n> min_set = None\r\n> for dataset in datasets:\r\n> if min_set != None:\r\n> min_set.intersection(set(dataset.keys()))\r\n> else:\r\n> min_set = set(dataset.keys())\r\n> return min_set\r\n> \r\n> common_splits = _get_common_splits(datasets)\r\n> out = {}\r\n> for split in common_splits:\r\n> out[split] = MultiDataset([d[split] for d in datasets])\r\n> return out\r\n> \r\n> \r\n> ##########################################\r\n> # Dataset Flattening\r\n> \r\n> def flatten(dataset,flatten_fn):\r\n> for k in dataset.keys():\r\n> if isinstance(dataset[k],nlp.Dataset):\r\n> dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n> \r\n> # Squad\r\n> def flatten_squad(example):\r\n> return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n> 
\"target\":example[\"answers\"][\"text\"]}\r\n> squad = nlp.load_dataset(\"squad\")\r\n> flatten(squad,flatten_squad)\r\n> \r\n> # CNN_DM\r\n> def flatten_cnn_dm(example):\r\n> return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\n> cnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\n> flatten(cnn_dm,flatten_cnn_dm)\r\n> \r\n> #############################################\r\n> \r\n> mtds = load_multitask(squad,cnn_dm)\r\n> \r\n> for example in mtds[\"train\"]:\r\n> print(example[\"task_name\"],example[\"target\"])\r\n> ```\r\n> \r\n> Let me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.\r\n\r\nNot sure if this is what I'm looking for, but I implemented a version of Examples-Proportional mixing supporting only the basic feature [here](https://stackoverflow.com/a/74070116/10732321), seems to work in my project. ",
"You can use `interleave_datasets` to mix several datasets together. By default it alternates between all the datasets, but you can also provide sampling probabilities if you want to oversample from one of the datasets\r\n\r\n```python\r\nfrom datasets import load_dataset, interleave_datasets\r\n\r\nsquad = load_dataset(\"squad\", split=\"train\")\r\ncnn_dm = load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\nds = interleave_datasets([squad, cnn_dm])\r\n\r\nprint(ds[0])\r\n# {'id': '5733be284776f41900661182',\r\n# 'title': 'University_of_Notre_Dame',\r\n# 'context': 'Architecturally, the school has a Catholic character...',\r\n# 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',\r\n# 'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]},\r\n# 'article': None,\r\n# 'highlights': None}\r\nprint(ds[1])\r\n# {'id': '42c027e4ff9730fbb3de84c1af0d2c506e41c3e4',\r\n# 'title': None,\r\n# 'context': None,\r\n# 'question': None,\r\n# 'answers': None,\r\n# 'article': 'LONDON, England (Reuters) -- Harry Potter star Daniel Radcliffe...',\r\n# 'highlights': \"Harry Potter star Daniel Radcliffe...\"}\r\n```\r\n\r\nsee docs at https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.interleave_datasets",
"I also have this implementation of multi-task sampler here which I used it to tune T5: https://github.com/rabeehk/hyperformer/blob/main/hyperformer/data/multitask_sampler.py "
] |
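For reference, the temperature-scaled mixing discussed in this thread can be sketched on top of `interleave_datasets` by turning the (optionally capped) dataset sizes into sampling probabilities. This is an illustration of the idea, not the API that was ultimately adopted; the default cap `k` below is arbitrary and should be tuned (the thread notes that this artificial size limit has a large impact).

```python
from datasets import interleave_datasets, load_dataset

def temperature_probabilities(sizes, temperature=2.0, k=2**21):
    # Cap each dataset size at k, apply the 1/T exponent, then renormalize.
    rates = [min(size, k) ** (1.0 / temperature) for size in sizes]
    total = sum(rates)
    return [rate / total for rate in rates]

squad = load_dataset("squad", split="train")
cnn_dm = load_dataset("cnn_dailymail", "3.0.0", split="train")

probs = temperature_probabilities([len(squad), len(cnn_dm)], temperature=2.0)
mixed = interleave_datasets([squad, cnn_dm], probabilities=probs, seed=42)
```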
https://api.github.com/repos/huggingface/datasets/issues/216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/216/comments | https://api.github.com/repos/huggingface/datasets/issues/216/events | https://github.com/huggingface/datasets/issues/216 | 626,896,890 | MDU6SXNzdWU2MjY4OTY4OTA= | 216 | ❓ How to get ROUGE-2 with the ROUGE metric ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | 3 | "2020-05-28T23:47:32Z" | "2020-06-01T00:04:35Z" | "2020-06-01T00:04:35Z" | NONE | null | null | null | I'm trying to use the ROUGE metric, but I don't know how to get the ROUGE-2 score.
---
I compute scores with:
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("pred.txt") as p, open("ref.txt") as g:
for lp, lg in zip(p, g):
rouge.add([lp], [lg])
score = rouge.compute()
```
then: _(printing only the F-score for readability)_
```python
for k, s in score.items():
print(k, s.mid.fmeasure)
```
It gives:
>rouge1 0.7915168355671788
rougeL 0.7915168355671788
---
**How can I get the ROUGE-2 score ?**
Also, it seems weird that the ROUGE-1 and ROUGE-L scores are the same. Did I make a mistake?
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/216/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/216/timeline | null | completed | false | [
"ROUGE-1 and ROUGE-L shouldn't return the same thing. This is weird",
"For the rouge2 metric you can do\r\n\r\n```python\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\nscore = rouge.compute(rouge_types=[\"rouge2\"])\r\n```\r\n\r\nNote that I just did a PR to have both `.add` and `.add_batch` for metrics, that's why now this is `rouge.add(lp, lg)` and not `rouge.add([lp], [lg])`",
"Well I just tested with the official script and both rouge1 and rougeL return exactly the same thing for the input you gave, so this is actually fine ^^\r\n\r\nI hope it helped :)"
] |
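Putting the answer from the comments together, a minimal sketch that reports ROUGE-1, ROUGE-2 and ROUGE-L in one call (the prediction/reference file names are the ones from the question):

```python
import nlp

rouge = nlp.load_metric("rouge")
with open("pred.txt") as p, open("ref.txt") as g:
    for lp, lg in zip(p, g):
        rouge.add(lp, lg)  # single prediction/reference, not one-element lists
score = rouge.compute(rouge_types=["rouge1", "rouge2", "rougeL"])
for rouge_type, s in score.items():
    print(rouge_type, s.mid.fmeasure)
```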
https://api.github.com/repos/huggingface/datasets/issues/215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/215/comments | https://api.github.com/repos/huggingface/datasets/issues/215/events | https://github.com/huggingface/datasets/issues/215 | 626,867,879 | MDU6SXNzdWU2MjY4Njc4Nzk= | 215 | NonMatchingSplitsSizesError when loading blog_authorship_corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/52105365?v=4",
"events_url": "https://api.github.com/users/cedricconol/events{/privacy}",
"followers_url": "https://api.github.com/users/cedricconol/followers",
"following_url": "https://api.github.com/users/cedricconol/following{/other_user}",
"gists_url": "https://api.github.com/users/cedricconol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cedricconol",
"id": 52105365,
"login": "cedricconol",
"node_id": "MDQ6VXNlcjUyMTA1MzY1",
"organizations_url": "https://api.github.com/users/cedricconol/orgs",
"received_events_url": "https://api.github.com/users/cedricconol/received_events",
"repos_url": "https://api.github.com/users/cedricconol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cedricconol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cedricconol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cedricconol"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 10 | "2020-05-28T22:55:19Z" | "2023-03-30T15:16:44Z" | "2022-02-10T13:05:45Z" | NONE | null | null | null | Getting this error when I run `nlp.load_dataset('blog_authorship_corpus')`.
```
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train',
num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'),
'recorded': SplitInfo(name='train', num_bytes=616473500, num_examples=536323,
dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation',
num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'),
'recorded': SplitInfo(name='validation', num_bytes=30786661, num_examples=27766,
dataset_name='blog_authorship_corpus')}]
```
Upon checking, it seems like there is a disparity between the information in `datasets/blog_authorship_corpus/dataset_infos.json` and what was downloaded. Although I can get away with this by passing `ignore_verifications=True` to `load_dataset`, I'm concerned that doing so might cause problems later on. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/215/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/215/timeline | null | completed | false | [
"I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation',\r\nnum_bytes=35652716, num_examples=30804, dataset_name='blog_authorship_corpus')}]\r\n```\r\nwhich is different from the `dataset_infos.json` and also different from yours.\r\n\r\nIt looks like the script for generating examples is not consistent",
"The files provided by the authors are corrupted and the script seems to ignore the xml files that can't be decoded (it does `try:... except UnicodeDecodeError`). Maybe depending of the environment some files can be opened and some others don't but not sure why",
"Feel free to do `ignore_verifications=True` for now... The verifications only include a check on the checksums of the downloaded files, and a check on the number of examples in each splits.",
"I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset. ",
"> I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset.\r\n\r\nWhen the checksums don't match, it may mean that the file you downloaded is corrupted. In this case you can try to load the dataset again `load_dataset(\"imdb\", download_mode=\"force_redownload\")`\r\n\r\nAlso I just checked on my side and it worked fine:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imdb\")\r\nprint(len(dataset[\"train\"]))\r\n# 25000\r\n```\r\n\r\nLet me know if redownloading fixes your issue @EmilyAlsentzer .\r\nIf not, feel free to open a separate issue.",
"It doesn't seem to fix the problem. I'll open a separate issue. Thanks. ",
"I wasn't aware of the \"force_redownload\" option and manually removed the '/home/me/.cache/huggingface/datasets/' dir, this worked for me (dataset 'cnn_dailymail')",
"Yes I think this might not be documented well enough. Let’s add it to the doc @lhoestq @SBrandeis.\r\nAnd everything on how to control the cache behavior better (removing, overriding, changing the path, etc)",
"Already fixed:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"blog_authorship_corpus\")\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'date', 'gender', 'age', 'horoscope', 'job'],\r\n num_rows: 689793\r\n })\r\n validation: Dataset({\r\n features: ['text', 'date', 'gender', 'age', 'horoscope', 'job'],\r\n num_rows: 37919\r\n })\r\n})\r\n",
"In my case, I had to remove the cache datasets directory completely as @putssander suggested, the download_mode='forced_redownload' was insufficient.\r\n\r\nI had a private repository with data files that I loaded with a loading script. It was working fine until I pushed a new version of the data files and then the NonMatchingSplitsSizesError was raised.\r\n"
] |
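A short sketch combining the workarounds mentioned in the comments above: first re-download in case the cached files are corrupted, and only fall back to skipping verification if the recorded split sizes are simply outdated.

```python
from datasets import load_dataset

# Try a clean re-download first, in case the cached files are corrupted.
ds = load_dataset("blog_authorship_corpus", download_mode="force_redownload")

# Last resort: skip the checksum/split-size verification
# (the sizes recorded in dataset_infos.json may just be out of date).
ds = load_dataset("blog_authorship_corpus", ignore_verifications=True)
```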
https://api.github.com/repos/huggingface/datasets/issues/214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/214/comments | https://api.github.com/repos/huggingface/datasets/issues/214/events | https://github.com/huggingface/datasets/pull/214 | 626,641,549 | MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx | 214 | [arrow_dataset.py] add new filter function | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 13 | "2020-05-28T16:21:40Z" | "2020-05-29T11:43:29Z" | "2020-05-29T11:32:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/214.diff",
"html_url": "https://github.com/huggingface/datasets/pull/214",
"merged_at": "2020-05-29T11:32:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/214.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/214"
} | The `.map()` function is super useful, but it can IMO be a bit tedious when filtering out certain examples.
I think filtering out examples is also a very common operation people would like to perform on datasets.
This PR is a proposal to add a `.filter()` function in the same spirit as the `.map()` function.
Here is some sample code you can play around with:
```python
import nlp

ds = nlp.load_dataset("squad", split="validation[:10%]")

def keep_under_idx_5(example, idx):
    # `.filter()` keeps the examples for which the function returns True.
    return idx < 5

def only_keep_examples_with_is_in_context(example):
    return "is" in example["context"]

result_keep_only_first_5 = ds.filter(keep_under_idx_5, with_indices=True, load_from_cache_file=False)
result_keep_examples_with_is_in_context = ds.filter(only_keep_examples_with_is_in_context, load_from_cache_file=False)

print("Original number of examples: {}".format(len(ds)))
print("First five examples number of examples: {}".format(len(result_keep_only_first_5)))
print("Is in context examples number of examples: {}".format(len(result_keep_examples_with_is_in_context)))
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/214/timeline | null | null | true | [
"I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet.",
"Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n```python\r\nfor i in range(num_examples):\r\n example = map_nested(lambda x: x[i], batch)\r\n # ... then test to keep it or not\r\n```",
"> Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n> \r\n> ```python\r\n> for i in range(num_examples):\r\n> example = map_nested(lambda x: x[i], batch)\r\n> # ... then test to keep it or not\r\n> ```\r\n\r\nAwesome I'll check it out :-) ",
"> Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n> \r\n> ```python\r\n> for i in range(num_examples):\r\n> example = map_nested(lambda x: x[i], batch)\r\n> # ... then test to keep it or not\r\n> ```\r\n\r\nAwesome this function is definitely much nicer!",
"Actually I just realized that `map_nested` might not work either as it applies the function at the very last list of the structure. However we can imagine that a single example has also a list in its structure:\r\n```python\r\none_example = {\r\n \"title\": \"blabla\",\r\n \"paragraphs\": [\r\n \"p1\", \"p2\", ...\r\n ]\r\n}\r\n```",
"We'll probably have to take into account the `dset._data.schema` to extract the examples from the batch.",
"> Actually I just realized that `map_nested` might not work either as it applies the function at the very last list of the structure. However we can imagine that a single example has also a list in its structure:\r\n> \r\n> ```python\r\n> one_example = {\r\n> \"title\": \"blabla\",\r\n> \"paragraphs\": [\r\n> \"p1\", \"p2\", ...\r\n> ]\r\n> }\r\n> ```\r\n\r\nThey both work. I'm using it on trivia_qa which is pretty nested. If you use the option `dict_only=True` I think it's fine.",
"> We'll probably have to take into account the `dset._data.schema` to extract the examples from the batch.\r\n\r\nWhy? ",
"Actually it's fine. I guess this is going to be yet another thing to be unit-tested just to make sure ^^",
"Yes, I will need to add tests and documentation! \r\n@thomwolf - would a function like this be ok? It abstracts `.map()` a bit which might be hard to understand. ",
"I tried on some datasets with nested structure and it works fine ! Great work :D \r\n",
"Awesome :-), I will add documentation and some simple unittests",
"Ok merging!"
] |
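The discussion in this PR converges on pulling individual examples out of a columnar batch and testing each one. A minimal sketch of that idea for flat (non-nested) batches, not the implementation actually merged here, could look like this:

```python
# Illustrative sketch only: extract example i from a columnar batch, apply the
# predicate, and rebuild a filtered batch. Nested features (see the map_nested
# discussion in the comments) would need extra handling.
def filter_batch(batch, predicate, with_indices=False, offset=0):
    num_examples = len(next(iter(batch.values())))
    kept = []
    for i in range(num_examples):
        example = {key: values[i] for key, values in batch.items()}
        keep_it = predicate(example, offset + i) if with_indices else predicate(example)
        if keep_it:
            kept.append(i)
    return {key: [values[i] for i in kept] for key, values in batch.items()}
```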
https://api.github.com/repos/huggingface/datasets/issues/213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/213/comments | https://api.github.com/repos/huggingface/datasets/issues/213/events | https://github.com/huggingface/datasets/pull/213 | 626,587,995 | MDExOlB1bGxSZXF1ZXN0NDI0NTUxODE3 | 213 | better message if missing beam options | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-28T15:06:57Z" | "2020-05-29T09:51:17Z" | "2020-05-29T09:51:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/213",
"merged_at": "2020-05-29T09:51:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/213"
} | WDYT @yjernite ?
For example:
```python
import nlp
dataset = nlp.load_dataset('wikipedia', '20200501.aa')
```
Raises:
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.aa', beam_runner='DirectRunner')`
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/213/timeline | null | null | true | [] |
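A sketch of the early check this PR describes, with hypothetical argument names rather than the exact builder code, might be:

```python
# Hypothetical guard raised before generating a Beam dataset; names and
# wording are illustrative, not the library's actual code.
class MissingBeamOptions(ValueError):
    pass

def check_beam_options(dataset_name, config_name, beam_runner=None, beam_options=None):
    if beam_runner is None and beam_options is None:
        raise MissingBeamOptions(
            f"Trying to generate {dataset_name}/{config_name} with Apache Beam, "
            "but no Beam runner or PipelineOptions was provided. For small datasets "
            "you can run locally with the DirectRunner, e.g. "
            f"load_dataset('{dataset_name}', '{config_name}', beam_runner='DirectRunner')."
        )
```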
https://api.github.com/repos/huggingface/datasets/issues/212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/212/comments | https://api.github.com/repos/huggingface/datasets/issues/212/events | https://github.com/huggingface/datasets/pull/212 | 626,580,198 | MDExOlB1bGxSZXF1ZXN0NDI0NTQ1NjAy | 212 | have 'add' and 'add_batch' for metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-28T14:56:47Z" | "2020-05-29T10:41:05Z" | "2020-05-29T10:41:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/212.diff",
"html_url": "https://github.com/huggingface/datasets/pull/212",
"merged_at": "2020-05-29T10:41:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/212.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/212"
} | This should fix #116
Previously the `.add` method of metrics expected a batch of examples.
Now `.add` expects one prediction/reference and `.add_batch` expects a batch.
I think it is more coherent with the way the ArrowWriter works. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/212/timeline | null | null | true | [] |
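A usage sketch of the `.add` / `.add_batch` split described in this PR follows; the keyword argument names are assumptions for illustration, not a confirmed API:

```python
# Hedged sketch of adding predictions one at a time vs. in batches;
# argument names (prediction/reference, predictions/references) are assumed.
import nlp

metric = nlp.load_metric("glue", "mrpc")

# one prediction/reference pair at a time
for pred, ref in zip([1, 0, 1], [1, 0, 0]):
    metric.add(prediction=pred, reference=ref)

# or a whole batch at once
metric.add_batch(predictions=[1, 0, 1], references=[1, 0, 0])

print(metric.compute())
```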
https://api.github.com/repos/huggingface/datasets/issues/211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/211/comments | https://api.github.com/repos/huggingface/datasets/issues/211/events | https://github.com/huggingface/datasets/issues/211 | 626,565,994 | MDU6SXNzdWU2MjY1NjU5OTQ= | 211 | [Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 7 | "2020-05-28T14:38:14Z" | "2020-07-23T10:15:16Z" | "2020-07-23T10:15:16Z" | CONTRIBUTOR | null | null | null | Running the following code
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, load_from_cache_file=False)
```
triggers a `ArrowInvalid: Could not convert TagMe with type str: converting to null type` error.
On the other hand if we remove a certain column of `trivia_qa` which seems responsible for the bug, it works:
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, remove_columns=["entity_pages"], load_from_cache_file=False)
```
Seems quite hard to debug what's going on here... @lhoestq @thomwolf - do you have a good first guess what the problem could be?
**Note**: I think this could be a good test to check that the datasets work correctly: take a tiny portion of the dataset and check that it can be written correctly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/211/timeline | null | completed | false | [
"Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's cached afterwards...\r\n----> 3 ds.map(lambda x: x, load_from_cache_file=False)\r\n\r\n~/python_bin/nlp/arrow_dataset.py in map(self, function, with_indices, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, arrow_schema, disable_nullable)\r\n 549\r\n 550 if update_data:\r\n--> 551 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n 552\r\n 553 # Create new Dataset from buffer or file\r\n\r\n~/python_bin/nlp/arrow_writer.py in finalize(self, close_stream)\r\n 182 def finalize(self, close_stream=True):\r\n 183 if self.pa_writer is not None:\r\n--> 184 self.write_on_file()\r\n 185 self.pa_writer.close()\r\n 186 if close_stream:\r\n\r\n~/python_bin/nlp/arrow_writer.py in write_on_file(self)\r\n 104 \"\"\"\r\n 105 if self.current_rows:\r\n--> 106 pa_array = pa.array(self.current_rows, type=self._type)\r\n 107 first_example = pa.array(self.current_rows[0:1], type=self._type)[0]\r\n 108 # Sanity check\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Could not convert TagMe with type str: converting to null type\r\n```",
"Actually thinking a bit more about it, it's probably a data sample that is not correct in `trivia_qa`. But I'm a bit surprised though that we managed to write it in .arrow format and now cannot write it anymore after an \"identity\" mapping.",
"I don't have this error :x",
"Interesting, maybe I have a very old cache of trivia_qa...thanks for checking",
"I'm running it right now on colab to double check",
"Actually, I know what the problem is...I'm quite sure it's a bug. Here we take some test inputs: https://github.com/huggingface/nlp/blob/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f/src/nlp/arrow_dataset.py#L472\r\n\r\nIt might be that in the test inputs, a `Sequence` type value is an emtpy list. So in my case I have `ds[0][\"entity_pages'][\"wiki_context\"] = []`. => this leads to an `arrow_schema` equal to `null` for `[\"entity_pages'][\"wiki_context\"]` => see line: https://github.com/huggingface/nlp/blob/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f/src/nlp/arrow_dataset.py#L501 instead of list of string which it should for other examples. \r\n\r\nGuess it's an edge case, but it can happen.",
"Good point, I think the schema should be infered at the writing stage where we have a `writer_batch_size` number of examples (typically 10k) so it's even less likely to run into this scenario."
] |
https://api.github.com/repos/huggingface/datasets/issues/210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/210/comments | https://api.github.com/repos/huggingface/datasets/issues/210/events | https://github.com/huggingface/datasets/pull/210 | 626,504,243 | MDExOlB1bGxSZXF1ZXN0NDI0NDgyNDgz | 210 | fix xnli metric kwargs description | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-28T13:21:44Z" | "2020-05-28T13:22:11Z" | "2020-05-28T13:22:10Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/210.diff",
"html_url": "https://github.com/huggingface/datasets/pull/210",
"merged_at": "2020-05-28T13:22:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/210.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/210"
} | The text was wrong as noticed in #202 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/210/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/209/comments | https://api.github.com/repos/huggingface/datasets/issues/209/events | https://github.com/huggingface/datasets/pull/209 | 626,405,849 | MDExOlB1bGxSZXF1ZXN0NDI0NDAwOTc4 | 209 | Add a Google Drive exception for small files | {
"avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4",
"events_url": "https://api.github.com/users/airKlizz/events{/privacy}",
"followers_url": "https://api.github.com/users/airKlizz/followers",
"following_url": "https://api.github.com/users/airKlizz/following{/other_user}",
"gists_url": "https://api.github.com/users/airKlizz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/airKlizz",
"id": 25703835,
"login": "airKlizz",
"node_id": "MDQ6VXNlcjI1NzAzODM1",
"organizations_url": "https://api.github.com/users/airKlizz/orgs",
"received_events_url": "https://api.github.com/users/airKlizz/received_events",
"repos_url": "https://api.github.com/users/airKlizz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/airKlizz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airKlizz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/airKlizz"
} | [] | closed | false | null | [] | null | 3 | "2020-05-28T10:40:17Z" | "2020-05-28T15:15:04Z" | "2020-05-28T15:15:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/209.diff",
"html_url": "https://github.com/huggingface/datasets/pull/209",
"merged_at": "2020-05-28T15:15:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/209.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/209"
I tried to use the ``nlp`` library to load personal datasets. I mainly copy-pasted the code from the ``multi-news`` dataset because my files are stored on Google Drive.
One of my datasets is small (< 25 MB), so it can be verified by Drive without asking the user for authorization. This makes the download start directly.
Currently ``nlp`` raises an error: ``ConnectionError: Couldn't reach https://drive.google.com/uc?export=download&id=1DGnbUY9zwiThTdgUvVTSAvSVHoloCgun`` even though the URL is working. So I just added a new exception, as you have already done for ``firebasestorage.googleapis.com``:
```
elif (response.status_code == 400 and "firebasestorage.googleapis.com" in url) or (response.status_code == 405 and "drive.google.com" in url):
```
I make an example of the error that you can run on [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ae_JJ9uvUt-9GBh0uGZhjbF5aXkl-BPv?usp=sharing)
I avoided the error by adding an exception, but there may be a proper way to do it.
Many thanks :hugs:
Best, | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/209/timeline | null | null | true | [
"Can you run the style formatting tools to pass the code quality test?\r\n\r\nYou can find all the details in CONTRIBUTING.md: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp",
"Nice ! ",
"``make style`` done! Thanks for the approvals."
] |
https://api.github.com/repos/huggingface/datasets/issues/208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/208/comments | https://api.github.com/repos/huggingface/datasets/issues/208/events | https://github.com/huggingface/datasets/pull/208 | 626,398,519 | MDExOlB1bGxSZXF1ZXN0NDI0Mzk0ODIx | 208 | [Dummy data] insert config name instead of config | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-05-28T10:28:19Z" | "2020-05-28T12:48:01Z" | "2020-05-28T12:48:00Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/208",
"merged_at": "2020-05-28T12:48:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/208"
Thanks @yjernite for letting me know. In the dummy data command, the config name should be passed to the dataset builder and not the config itself.
Also, @lhoestq - fixed a small import bug introduced by the beam command, I think. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/208/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/207/comments | https://api.github.com/repos/huggingface/datasets/issues/207/events | https://github.com/huggingface/datasets/issues/207 | 625,932,200 | MDU6SXNzdWU2MjU5MzIyMDA= | 207 | Remove test set from NLP viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/748399?v=4",
"events_url": "https://api.github.com/users/chrisdonahue/events{/privacy}",
"followers_url": "https://api.github.com/users/chrisdonahue/followers",
"following_url": "https://api.github.com/users/chrisdonahue/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisdonahue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chrisdonahue",
"id": 748399,
"login": "chrisdonahue",
"node_id": "MDQ6VXNlcjc0ODM5OQ==",
"organizations_url": "https://api.github.com/users/chrisdonahue/orgs",
"received_events_url": "https://api.github.com/users/chrisdonahue/received_events",
"repos_url": "https://api.github.com/users/chrisdonahue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chrisdonahue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisdonahue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chrisdonahue"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 3 | "2020-05-27T18:32:07Z" | "2022-02-10T13:17:45Z" | "2022-02-10T13:17:45Z" | NONE | null | null | null | While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and small things like this can help increase awareness. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/207/timeline | null | completed | false | [
"~is the viewer also open source?~\r\n[is a streamlit app!](https://docs.streamlit.io/en/latest/getting_started.html)",
"Appears that [two thirds of those polled on Twitter](https://twitter.com/srush_nlp/status/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data.",
"We do no longer use datasets-viewer."
] |
https://api.github.com/repos/huggingface/datasets/issues/206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/206/comments | https://api.github.com/repos/huggingface/datasets/issues/206/events | https://github.com/huggingface/datasets/issues/206 | 625,842,989 | MDU6SXNzdWU2MjU4NDI5ODk= | 206 | [Question] Combine 2 datasets which have the same columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4",
"events_url": "https://api.github.com/users/airKlizz/events{/privacy}",
"followers_url": "https://api.github.com/users/airKlizz/followers",
"following_url": "https://api.github.com/users/airKlizz/following{/other_user}",
"gists_url": "https://api.github.com/users/airKlizz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/airKlizz",
"id": 25703835,
"login": "airKlizz",
"node_id": "MDQ6VXNlcjI1NzAzODM1",
"organizations_url": "https://api.github.com/users/airKlizz/orgs",
"received_events_url": "https://api.github.com/users/airKlizz/received_events",
"repos_url": "https://api.github.com/users/airKlizz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/airKlizz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airKlizz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/airKlizz"
} | [] | closed | false | null | [] | null | 2 | "2020-05-27T16:25:52Z" | "2020-06-10T09:11:14Z" | "2020-06-10T09:11:14Z" | CONTRIBUTOR | null | null | null | Hi,
I am using ``nlp`` to load personal datasets. I created summarization datasets in multiple languages based on WikiNews. I have one dataset for English and one for German (French is almost ready as well). I want to keep these datasets independent because they need different pre-processing (adding different task-specific prefixes for T5: *summarize:* for English and *zusammenfassen:* for German).
My issue is that I want to train T5 on the combined English and German datasets to see if it improves results. So I would like to combine the two datasets (which have the same columns) into one and train T5 on it. I was wondering if there is a proper way to do it? I assume it can be done by combining all examples of each dataset, but maybe you have a better solution.
Hoping this is clear enough,
Thanks a lot 😊
Best | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/206/timeline | null | completed | false | [
"We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.",
"Ok great! I will look at it. Thanks"
] |
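Since both datasets share the same columns, one generic way to combine them (the replies above point to ongoing work in #217 rather than an existing helper) is to concatenate their underlying Arrow tables. The tables below are stand-ins for the real data:

```python
# Stand-in tables with identical schemas; with real datasets you would first
# obtain their Arrow tables, which is an internal detail, so treat this as a sketch.
import pyarrow as pa

en_table = pa.table({"document": ["An English article ..."], "summary": ["summarize: ..."]})
de_table = pa.table({"document": ["Ein deutscher Artikel ..."], "summary": ["zusammenfassen: ..."]})

combined = pa.concat_tables([en_table, de_table])  # requires matching schemas
print(combined.num_rows)  # 2
```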
https://api.github.com/repos/huggingface/datasets/issues/205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/205/comments | https://api.github.com/repos/huggingface/datasets/issues/205/events | https://github.com/huggingface/datasets/pull/205 | 625,839,335 | MDExOlB1bGxSZXF1ZXN0NDIzOTY2ODE1 | 205 | Better arrow dataset iter | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-27T16:20:21Z" | "2020-05-27T16:39:58Z" | "2020-05-27T16:39:56Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/205",
"merged_at": "2020-05-27T16:39:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/205"
} | I tried to play around with `tf.data.Dataset.from_generator` and I found out that the `__iter__` that we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow).
With these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/205/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/205/timeline | null | null | true | [] |
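For context on the lazy-loading goal mentioned in this PR, a `tf.data.Dataset` can be built from a Python generator. The feature name below is an assumption for illustration; `examples` stands in for an `nlp` dataset iterated in the tensorflow format:

```python
# Sketch: wrap an iterable of examples in a lazily evaluated tf.data.Dataset.
import tensorflow as tf

examples = [{"text": "first example"}, {"text": "second example"}]

tf_dataset = tf.data.Dataset.from_generator(
    lambda: (ex for ex in examples),
    output_types={"text": tf.string},
)

for batch in tf_dataset.batch(2):
    print(batch["text"])
```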
https://api.github.com/repos/huggingface/datasets/issues/204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/204/comments | https://api.github.com/repos/huggingface/datasets/issues/204/events | https://github.com/huggingface/datasets/pull/204 | 625,655,849 | MDExOlB1bGxSZXF1ZXN0NDIzODE5MTQw | 204 | Add Dataflow support + Wikipedia + Wiki40b | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-27T12:32:49Z" | "2020-05-28T08:10:35Z" | "2020-05-28T08:10:34Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/204.diff",
"html_url": "https://github.com/huggingface/datasets/pull/204",
"merged_at": "2020-05-28T08:10:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/204.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/204"
} | # Add Dataflow support + Wikipedia + Wiki40b
## Support datasets processing with Apache Beam
Some datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam allows to process datasets on many execution engines like Dataflow, Spark, Flink, etc.
To process such datasets with Beam, I added a command to run beam pipelines `nlp-cli run_beam path/to/dataset/script`. Then I used it to process the english + french wikipedia, and the english of wiki40b.
The processed arrow files are on GCS and are the result of a Dataflow job.
I added a markdown documentation file in `docs` that explains how to use it properly.
## Load already processed datasets
Now that we have those datasets already processed, I made it possible to load datasets that are already processed. You can do `load_dataset('wikipedia', '20200501.en')` and it will download the processed files from the Hugging Face GCS directly into the user's cache and be ready to use !
The Wikipedia dataset was already asked in #187 and this PR should soon allow to add Natural Questions as asked in #129
## Other changes in the code
To make things work, I had to do a few adjustments:
- add a `ship_files_with_pipeline` method to the `DownloadManager`. This is because beam pipelines can be run in the cloud and therefore need to have access to your downloaded data. I used it in the wikipedia script:
```python
if not pipeline.is_local():
downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline)
```
- add parquet to arrow conversion. This is because the output of beam pipelines are parquet files so we need to convert them to arrow and have the arrow files on GCS
- add a test script with a dummy beam dataset
- minor adjustments to allow read/write operations on remote files using `apache_beam.io.filesystems.FileSystems` if we want (it can be connected to gcp, s3, hdfs, etc...) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/204/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/204/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/203/comments | https://api.github.com/repos/huggingface/datasets/issues/203/events | https://github.com/huggingface/datasets/pull/203 | 625,515,488 | MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3 | 203 | Raise an error if no config name for datasets like glue | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-27T09:03:58Z" | "2020-05-27T16:40:39Z" | "2020-05-27T16:40:38Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/203",
"merged_at": "2020-05-27T16:40:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/203"
} | Some datasets like glue (see #130) and scientific_papers (see #197) have many configs.
For example for glue there are cola, sst2, mrpc etc.
Currently if a user does `load_dataset('glue')`, then Cola is loaded by default and it can be confusing. Instead, we should raise an error to let the user know that he has to pick one of the available configs (as proposed in #152). For example for glue, the message looks like:
```
ValueError: Config name is missing.
Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']
Example of usage:
`load_dataset('glue', 'cola')`
```
The error is raised if the config name is missing and if there are >=2 possible configs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/203/timeline | null | null | true | [] |
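A simplified sketch of the check this PR describes (not the exact code added to the loading logic) might be:

```python
# Simplified illustration of the config-name check; the real implementation
# lives in the builder/loading code and differs in details.
def pick_config(dataset_name, config_name, available_configs):
    if config_name is None and len(available_configs) >= 2:
        raise ValueError(
            "Config name is missing.\n"
            f"Please pick one among the available configs: {list(available_configs)}\n"
            f"Example of usage:\n\t`load_dataset('{dataset_name}', '{available_configs[0]}')`"
        )
    return config_name or (available_configs[0] if available_configs else None)
```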
https://api.github.com/repos/huggingface/datasets/issues/202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/202/comments | https://api.github.com/repos/huggingface/datasets/issues/202/events | https://github.com/huggingface/datasets/issues/202 | 625,493,983 | MDU6SXNzdWU2MjU0OTM5ODM= | 202 | Mistaken `_KWARGS_DESCRIPTION` for XNLI metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4",
"events_url": "https://api.github.com/users/phiyodr/events{/privacy}",
"followers_url": "https://api.github.com/users/phiyodr/followers",
"following_url": "https://api.github.com/users/phiyodr/following{/other_user}",
"gists_url": "https://api.github.com/users/phiyodr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phiyodr",
"id": 33572125,
"login": "phiyodr",
"node_id": "MDQ6VXNlcjMzNTcyMTI1",
"organizations_url": "https://api.github.com/users/phiyodr/orgs",
"received_events_url": "https://api.github.com/users/phiyodr/received_events",
"repos_url": "https://api.github.com/users/phiyodr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phiyodr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phiyodr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phiyodr"
} | [] | closed | false | null | [] | null | 1 | "2020-05-27T08:34:42Z" | "2020-05-28T13:22:36Z" | "2020-05-28T13:22:36Z" | NONE | null | null | null | Hi!
The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric:
```
_KWARGS_DESCRIPTION = """
Computes XNLI score which is just simple accuracy.
Args:
predictions: list of translations to score.
Each translation should be tokenized into a list of tokens.
references: list of lists of references for each translation.
Each reference should be tokenized into a list of tokens.
max_order: Maximum n-gram order to use when computing BLEU score.
smooth: Whether or not to apply Lin et al. 2004 smoothing.
Returns:
'bleu': bleu score,
'precisions': geometric mean of n-gram precisions,
'brevity_penalty': brevity penalty,
'length_ratio': ratio of lengths,
'translation_length': translation_length,
'reference_length': reference_length
"""
```
But it should be something like:
```
_KWARGS_DESCRIPTION = """
Computes XNLI score which is just simple accuracy.
Args:
predictions: Predicted labels.
references: Ground truth labels.
Returns:
'accuracy': accuracy
"""
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/202/timeline | null | completed | false | [
"Indeed, good catch ! thanks\r\nFixing it right now"
] |
https://api.github.com/repos/huggingface/datasets/issues/201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/201/comments | https://api.github.com/repos/huggingface/datasets/issues/201/events | https://github.com/huggingface/datasets/pull/201 | 625,235,430 | MDExOlB1bGxSZXF1ZXN0NDIzNDkzNTMw | 201 | Fix typo in README | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [] | closed | false | null | [] | null | 2 | "2020-05-26T22:18:21Z" | "2020-05-26T23:40:31Z" | "2020-05-26T23:00:56Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/201",
"merged_at": "2020-05-26T23:00:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/201"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/201/timeline | null | null | true | [
"Amazing, @LysandreJik!",
"Really did my best!"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/200/comments | https://api.github.com/repos/huggingface/datasets/issues/200/events | https://github.com/huggingface/datasets/pull/200 | 625,226,638 | MDExOlB1bGxSZXF1ZXN0NDIzNDg2NTM0 | 200 | [ArrowWriter] Set schema at first write example | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-05-26T21:59:48Z" | "2020-05-27T09:07:54Z" | "2020-05-27T09:07:53Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/200.diff",
"html_url": "https://github.com/huggingface/datasets/pull/200",
"merged_at": "2020-05-27T09:07:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/200.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/200"
} | Right now if the schema was not specified when instantiating `ArrowWriter`, then it could be set with the first `write_table` for example (it calls `self._build_writer()` to do so).
I noticed that it was not done if the first example is added via `.write`, so I added it for coherence. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/200/timeline | null | null | true | [
"Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?"
] |
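For reference, pyarrow can infer a schema from a single first example, which is essentially what setting the schema at the first `write` relies on. This snippet only illustrates the pyarrow side, not the ArrowWriter internals:

```python
# Infer an Arrow schema from one example using plain pyarrow.
import pyarrow as pa

first_example = {"id": "abc123", "score": 0.5, "tokens": ["a", "b"]}
inferred = pa.Table.from_pydict({k: [v] for k, v in first_example.items()}).schema
print(inferred)
# id: string
# score: double
# tokens: list<item: string>
```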
https://api.github.com/repos/huggingface/datasets/issues/199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/199/comments | https://api.github.com/repos/huggingface/datasets/issues/199/events | https://github.com/huggingface/datasets/pull/199 | 625,217,440 | MDExOlB1bGxSZXF1ZXN0NDIzNDc4ODIx | 199 | Fix GermEval 2014 dataset infos | {
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stefan-it",
"id": 20651387,
"login": "stefan-it",
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stefan-it"
} | [] | closed | false | null | [] | null | 2 | "2020-05-26T21:41:44Z" | "2020-05-26T21:50:24Z" | "2020-05-26T21:50:24Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/199",
"merged_at": "2020-05-26T21:50:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/199"
} | Hi,
this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/199/timeline | null | null | true | [
"Hopefully. this also fixes the dataset view on https://huggingface.co/nlp/viewer/ :)",
"Oh good catch ! This should fix it indeed"
] |
https://api.github.com/repos/huggingface/datasets/issues/198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/198/comments | https://api.github.com/repos/huggingface/datasets/issues/198/events | https://github.com/huggingface/datasets/issues/198 | 625,200,627 | MDU6SXNzdWU2MjUyMDA2Mjc= | 198 | Index outside of table length | {
"avatar_url": "https://avatars.githubusercontent.com/u/305717?v=4",
"events_url": "https://api.github.com/users/casajarm/events{/privacy}",
"followers_url": "https://api.github.com/users/casajarm/followers",
"following_url": "https://api.github.com/users/casajarm/following{/other_user}",
"gists_url": "https://api.github.com/users/casajarm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/casajarm",
"id": 305717,
"login": "casajarm",
"node_id": "MDQ6VXNlcjMwNTcxNw==",
"organizations_url": "https://api.github.com/users/casajarm/orgs",
"received_events_url": "https://api.github.com/users/casajarm/received_events",
"repos_url": "https://api.github.com/users/casajarm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/casajarm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casajarm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/casajarm"
} | [] | closed | false | null | [] | null | 2 | "2020-05-26T21:09:40Z" | "2020-05-26T22:43:49Z" | "2020-05-26T22:43:49Z" | NONE | null | null | null | The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955).
> ValueError: Index (2000) outside of table length (2000).
> Traceback:
> File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
> exec(code, module.__dict__)
> File "/home/sasha/nlp_viewer/run.py", line 116, in <module>
> v = d[item][k]
> File "/home/sasha/.local/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__
> output_all_columns=self._output_all_columns,
> File "/home/sasha/.local/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 290, in _getitem
> raise ValueError(f"Index ({key}) outside of table length ({self._data.num_rows}).") | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/198/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/198/timeline | null | completed | false | [
"Sounds like something related to the nlp viewer @srush ",
"Fixed. "
] |
https://api.github.com/repos/huggingface/datasets/issues/197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/197/comments | https://api.github.com/repos/huggingface/datasets/issues/197/events | https://github.com/huggingface/datasets/issues/197 | 624,966,904 | MDU6SXNzdWU2MjQ5NjY5MDQ= | 197 | Scientific Papers only downloading Pubmed | {
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/antmarakis",
"id": 17463361,
"login": "antmarakis",
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/antmarakis"
} | [] | closed | false | null | [] | null | 3 | "2020-05-26T15:18:47Z" | "2020-05-28T08:19:28Z" | "2020-05-28T08:19:28Z" | NONE | null | null | null | Hi!
I have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. But when I run the following:
```
dataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.')
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.05k/5.05k [00:00<00:00, 2.66MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.90k/4.90k [00:00<00:00, 2.42MB/s]
Downloading and preparing dataset scientific_papers/pubmed (download: 4.20 GiB, generated: 2.33 GiB, total: 6.53 GiB) to ./scientific_papers/pubmed/1.1.1...
Downloading: 3.62GB [00:40, 90.5MB/s]
Downloading: 880MB [00:08, 101MB/s]
Dataset scientific_papers downloaded and prepared to ./scientific_papers/pubmed/1.1.1. Subsequent calls will reuse this data.
```
only a pubmed folder is created. There doesn't seem to be anything for arxiv. Are the two datasets merged? Or have I misunderstood something?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/197/timeline | null | completed | false | [
"Hi so there are indeed two configurations in the datasets as you can see [here](https://github.com/huggingface/nlp/blob/master/datasets/scientific_papers/scientific_papers.py#L81-L82).\r\n\r\nYou can load either one with:\r\n```python\r\ndataset = nlp.load_dataset('scientific_papers', 'pubmed')\r\ndataset = nlp.load_dataset('scientific_papers', 'arxiv')\r\n```\r\n\r\nThis issues is actually related to a similar user-experience issue with GLUE. When several configurations are available and the first configuration is loaded by default (see issue #152 and #130), it seems to be unexpected for users.\r\n\r\nI think we should maybe raise a (very explicit) error when there are several configurations available and the user doesn't specify one.\r\n\r\nWhat do you think @lhoestq @patrickvonplaten @mariamabarham ?",
"Yes, it looks like the right thing to do ",
"Now if you don't specify which part you want, it raises an error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['pubmed', 'arxiv']\r\nExample of usage:\r\n\t`load_dataset('scientific_papers', 'pubmed')`\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/196/comments | https://api.github.com/repos/huggingface/datasets/issues/196/events | https://github.com/huggingface/datasets/pull/196 | 624,901,266 | MDExOlB1bGxSZXF1ZXN0NDIzMjIwMjIw | 196 | Check invalid config name | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 13 | "2020-05-26T13:52:51Z" | "2020-05-26T21:04:56Z" | "2020-05-26T21:04:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/196",
"merged_at": "2020-05-26T21:04:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/196"
} | As discussed in #194, we should raise an error if the config name contains bad characters.
Bad characters are those that are not allowed in directory names on Windows. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/196/timeline | null | null | true | [
"I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\n",
"> I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\nThe filenames of the dummy data are now encoded (see #173). So this is not a problem anymore.\r\n\r\nThe problem here is different and comes from the directory names where we save the arrow files (basically `dataset_name/config_name/version`). In this case we could have invalid directory names because of the config name\r\n",
"Okay great then.",
"I like the method, but I'm wondering whether it should just be a test method instead of a `__post_init__` function. From a logical point of view the only reason this error would be thrown is because of an invalid config name introduced when creating the dataset script / adding a new dataset => so I think it might be better to write a simple test for this in `test_dataset_common.py`...what do you think @lhoestq ?",
"`test_dataset_common.py` only tests canonical datasets no ? What if users wants to create their own script ?",
"> `test_dataset_common.py` only tests canonical datasets no ? What if users wants to create their own script ?\r\n\r\nIt tests all dataset that can be loaded either locally or on AWS (which includes all non-canonical datasets as well)...by their own script you mean like a private dataset script that they don't want to be public? I guess even then they could locally run the test functions to check...",
"We could have a bunch of simple consistency tests that run before uploading with the CLI (without loading data if we don't want to force the user to have dummy data)?",
"Let's say someone want to create his own private script. As the script is not meant to be shared, it's not going to be placed in `/datasets` right ? Maybe the script is going to be inside another project. If I'm not wrong in this case the `test_dataset_common.py` is not going to test his script.\r\n\r\nRaising an error in the post init is a sanity check that would tell the user immediately what's wrong.\r\nThe error is raised if he tried to load the script or if he uses `nlp-cli test`",
"> Let's say someone want to create his own private script. As the script is not meant to be shared, it's not going to be placed in `/datasets` right ? Maybe the script is going to be inside another project. If I'm not wrong in this case the `test_dataset_common.py` is not going to test his script.\r\n> \r\n> Raising an error in the post init is a sanity check that would tell the user immediately what's wrong.\r\n> The error is raised if he tried to load the script or if he uses `nlp-cli test`\r\n\r\nOK, fair point! I'm good with this then :-) ",
"I'm fine with this as well (even though I understand what you meant @patrickvonplaten, we can still change it later if needed)",
"> We could have a bunch of simple consistency tests that run before uploading with the CLI (without loading data if we don't want to force the user to have dummy data)?\r\n\r\nYes! I guess that's a big question whether we should force the user to add dummy data. It's probably too tedious for the user...so when uploading to circle ci should we just check \r\n- 1) All configs can be instantiated (if there are any)\r\n- 2) The BuilderClass can be instantiated ... \r\n- 3) ... maybe some more\r\n\r\nand maybe suggest to the user to add dummy data using the dummy data command?",
"I really like that we have a test with dummy data for canonical datasets. This is insurance that they'll keep working in the long run. \r\n\r\nOn the other hand I understand that we will probably not force this practice for scripts uploaded on S3 by a user under his namespace (non-canonical), as it is tedious. As I understand right now the test is done for all the datasets on aws, even the non-canonical ? We should think about different tests for non-canonical datasets.\r\n\r\nI also like the idea of a simple consistency test !",
"Merging this one for now, we can think about the test for non-canonical datasets later"
] |
https://api.github.com/repos/huggingface/datasets/issues/195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/195/comments | https://api.github.com/repos/huggingface/datasets/issues/195/events | https://github.com/huggingface/datasets/pull/195 | 624,858,686 | MDExOlB1bGxSZXF1ZXN0NDIzMTg1NTAy | 195 | [Dummy data command] add new case to command | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 1 | "2020-05-26T12:50:47Z" | "2020-05-26T14:38:28Z" | "2020-05-26T14:38:27Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/195.diff",
"html_url": "https://github.com/huggingface/datasets/pull/195",
"merged_at": "2020-05-26T14:38:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/195.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/195"
} | Qanta: #194 introduces a case that was not noticed before. This change in code helps community users to have an easier time creating the dummy data. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/195/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/195/timeline | null | null | true | [
"@lhoestq - tiny change in the dummy data command, should be good to merge."
] |
https://api.github.com/repos/huggingface/datasets/issues/194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/194/comments | https://api.github.com/repos/huggingface/datasets/issues/194/events | https://github.com/huggingface/datasets/pull/194 | 624,854,897 | MDExOlB1bGxSZXF1ZXN0NDIzMTgyNDM5 | 194 | Add Dataset: Qanta | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 3 | "2020-05-26T12:44:35Z" | "2020-05-26T16:58:17Z" | "2020-05-26T13:16:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/194",
"merged_at": "2020-05-26T13:16:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/194"
} | Fixes dummy data for #169 @EntilZha | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/194/timeline | null | null | true | [
"@lhoestq - the config name is rather special here: *E.g.* `mode=first,char_skip=25`. It includes `=` and `,` - will that be a problem for windows folders, you think? \r\n\r\nApart from that good to merge for me.",
"It's ok to have `=` and `,`.\r\nWindows doesn't like things like `?`, `:`, `/` etc.\r\n\r\nI'll add some lines to raise an error if the config name is invalid.",
"Thanks for fixing things up! I'm curious to take a look at the zip files now to know the format for future reference."
] |
https://api.github.com/repos/huggingface/datasets/issues/193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/193/comments | https://api.github.com/repos/huggingface/datasets/issues/193/events | https://github.com/huggingface/datasets/issues/193 | 624,655,558 | MDU6SXNzdWU2MjQ2NTU1NTg= | 193 | [Tensorflow] Use something else than `from_tensor_slices()` | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 7 | "2020-05-26T07:19:14Z" | "2020-10-27T15:28:11Z" | "2020-10-27T15:28:11Z" | NONE | null | null | null | In the example notebook, the TF Dataset is built using `from_tensor_slices()` :
```python
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x] for x in columns[:3]}
labels = {"output_1": train_tf_dataset["start_positions"]}
labels["output_2"] = train_tf_dataset["end_positions"]
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
```
But according to the [official tensorflow documentation](https://www.tensorflow.org/guide/data#consuming_numpy_arrays), this will load the entire dataset into memory.
**This defeats one purpose of this library, which is lazy loading.**
Is there any other way to load the `nlp` dataset into a TF dataset lazily?
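One direction that comes to mind is to wrap the dataset in a plain Python generator and build the TF dataset from that. The snippet below is only a rough, untested sketch; it assumes the tokenized `train_tf_dataset` and the column names from the example above:
```python
import tensorflow as tf

feature_columns = ("input_ids", "token_type_ids", "attention_mask")

def example_generator():
    # Yields one example at a time from the Arrow-backed dataset instead of
    # materializing every column as a single in-memory tensor.
    for example in train_tf_dataset:
        features = {k: example[k] for k in feature_columns}
        labels = {"output_1": example["start_positions"],
                  "output_2": example["end_positions"]}
        yield features, labels

lazy_tfdataset = tf.data.Dataset.from_generator(
    example_generator,
    output_types=(
        {k: tf.int32 for k in feature_columns},        # adjust dtypes to match
        {"output_1": tf.int32, "output_2": tf.int32},  # the actual columns
    ),
).batch(8)
```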
---
For example, is it possible to use [Arrow dataset](https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowDataset) ? If yes, is there any code example ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/193/timeline | null | completed | false | [
"I guess we can use `tf.data.Dataset.from_generator` instead. I'll give it a try.",
"Is `tf.data.Dataset.from_generator` working on TPU ?",
"`from_generator` is not working on TPU, I met the following error :\r\n\r\n```\r\nFile \"/usr/local/lib/python3.6/contextlib.py\", line 88, in __exit__\r\n next(self.gen)\r\n File \"/home/usr/.venv/bart/lib/python3.6/site-packages/tensorflow_core/python/eager/context.py\", line 1900, in execution_mode\r\n executor_new.wait()\r\n File \"/home/usr/.venv/bart/lib/python3.6/site-packages/tensorflow_core/python/eager/executor.py\", line 67, in wait\r\n pywrap_tensorflow.TFE_ExecutorWaitForAllPendingNodes(self._handle)\r\ntensorflow.python.framework.errors_impl.NotFoundError: No registered 'PyFunc' OpKernel for 'CPU' devices compatible with node {{node PyFunc}}\r\n . Registered: <no registered kernels>\r\n\r\n [[PyFunc]]\r\n```\r\n\r\n---\r\n\r\n@lhoestq It seems you merged some changes that allow lazy-loading. **Can you give an example of how to use ?** Maybe the Colab notebook should be updated with this method as well.",
"Could you send me the code you used to run create the dataset using `.from_generator` ? What version of tensorflow are you using ?",
"I'm using TF2.2\r\n\r\nHere is my code :\r\n```\r\nimport nlp\r\nfrom transformers import BartTokenizer\r\n\r\ntokenizer = BartTokenizer.from_pretrained('bart-large')\r\n\r\ndef encode(sample):\r\n article_inputs = tokenizer.encode_plus(sample[\"article\"], max_length=tokenizer.model_max_length, pad_to_max_length=True)\r\n summary_inputs = tokenizer.encode_plus(sample[\"highlights\"], max_length=tokenizer.model_max_length, pad_to_max_length=True)\r\n\r\n article_inputs.update({\"lm_labels\": summary_inputs['input_ids']})\r\n return article_inputs\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail', '3.0.0', split='test')\r\ncnn_dm = cnn_dm.map(encode)\r\n\r\ndef gen():\r\n for sample in cnn_dm:\r\n s = {}\r\n s['input_ids'] = sample['input_ids']\r\n s['attention_mask'] = sample['attention_mask']\r\n s['lm_labels'] = sample['lm_labels']\r\n yield s\r\n\r\ndataset = tf.data.Dataset.from_generator(gen, output_types={k: tf.int32 for k in ['input_ids', 'attention_mask', 'lm_labels']}, output_shapes={k: tf.TensorShape([tokenizer.model_max_length]) for k in ['input_ids', 'attention_mask', 'lm_labels']}\r\n```",
"Apparently we'll have to wait for the next tensorflow release to use `.from_generator` and TPU. See https://github.com/tensorflow/tensorflow/issues/34346#issuecomment-598262489",
"Fixed by https://github.com/huggingface/datasets/pull/339"
] |
https://api.github.com/repos/huggingface/datasets/issues/192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/192/comments | https://api.github.com/repos/huggingface/datasets/issues/192/events | https://github.com/huggingface/datasets/issues/192 | 624,397,592 | MDU6SXNzdWU2MjQzOTc1OTI= | 192 | [Question] Create Apache Arrow dataset from raw text file | {
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mrm8488",
"id": 3653789,
"login": "mrm8488",
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mrm8488"
} | [] | closed | false | null | [] | null | 4 | "2020-05-25T16:42:47Z" | "2021-12-18T01:45:34Z" | "2020-10-27T15:20:22Z" | CONTRIBUTOR | null | null | null | Hi guys, I have gathered and preprocessed about 2GB of COVID papers from the CORD dataset @ Kaggle. I have seen you have a text dataset, "Crime and punishment", in Apache Arrow format. Do you have any script to do this from a raw txt file (preprocessed BERT-style), or any guide?
Would it be worth sending it to you and adding it to the NLP library?
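For reference, here is a minimal sketch of the kind of local-file loading described in the library docs (the `text` loader name and the file path are assumptions on my side, not a confirmed recipe):
```python
import nlp

# Sketch only: each line of the preprocessed txt file becomes one example
# with a single "text" field, backed by Apache Arrow on disk.
corpus = nlp.load_dataset(
    "text",
    data_files={"train": "cord19_preprocessed.txt"},  # placeholder path
)
print(corpus["train"][0])
```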
Thanks, Manu
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/192/timeline | null | completed | false | [
"We store every dataset in the Arrow format. This is convenient as it supports nested types and memory mapping. If you are curious feel free to check the [pyarrow documentation](https://arrow.apache.org/docs/python/)\r\n\r\nYou can use this library to load your covid papers by creating a dataset script. You can find inspiration from the ones we've already written in `/datasets`. Here is a link to the steps to [add a dataset](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset)",
"Hello @mrm8488 and @lhoestq \r\n\r\nIs there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n\r\nThanks :)",
"> Is there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n\r\nSure, to get a dataset in arrow format you can either:\r\n- [load from local files (txt, json, csv)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-local-files)\r\n- OR [load from python data (dict, pandas)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-in-memory-data)\r\n- OR [create your own dataset script](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#using-a-custom-dataset-loading-script)\r\n",
"> > Is there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n> \r\n> Sure, to get a dataset in arrow format you can either:\r\n> \r\n> * [load from local files (txt, json, csv)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-local-files)\r\n> \r\n> * OR [load from python data (dict, pandas)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-in-memory-data)\r\n> \r\n> * OR [create your own dataset script](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#using-a-custom-dataset-loading-script)\r\n\r\nLinks were broken. \r\n\r\nUpdated links provided as below\r\n- [load from local files (txt, json, csv)](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-or-remote-files)\r\n- [load from python data (dict, pandas)](https://huggingface.co/docs/datasets/loading_datasets.html#from-in-memory-data)\r\n- [create your own dataset script](https://huggingface.co/docs/datasets/loading_datasets.html#using-a-custom-dataset-loading-script)\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/191/comments | https://api.github.com/repos/huggingface/datasets/issues/191/events | https://github.com/huggingface/datasets/pull/191 | 624,394,936 | MDExOlB1bGxSZXF1ZXN0NDIyODI3MDMy | 191 | [Squad es] add dataset_infos | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-05-25T16:35:52Z" | "2020-05-25T16:39:59Z" | "2020-05-25T16:39:58Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/191",
"merged_at": "2020-05-25T16:39:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/191"
} | @mariamabarham - was still about to upload this. Should have waited with my comment a bit more :D | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/191/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/190/comments | https://api.github.com/repos/huggingface/datasets/issues/190/events | https://github.com/huggingface/datasets/pull/190 | 624,124,600 | MDExOlB1bGxSZXF1ZXN0NDIyNjA4NzAw | 190 | add squad Spanish v1 and v2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 5 | "2020-05-25T08:08:40Z" | "2020-05-25T16:28:46Z" | "2020-05-25T16:28:45Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/190.diff",
"html_url": "https://github.com/huggingface/datasets/pull/190",
"merged_at": "2020-05-25T16:28:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/190.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/190"
} | This PR adds the Spanish SQuAD versions 1 and 2 datasets.
Fixes #164 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/190/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/190/timeline | null | null | true | [
"Nice ! :) \r\nCan we group them into one dataset with two versions, instead of having two datasets ?",
"Yes sure, I can use the version as config name",
"@lhoestq can you check? I grouped them",
"Awesome :) feel free to merge after fixing the test in the CI",
"@mariamabarham - feel free to merge when you're ready. I only checked the dummy files. I did not run the SLOW tests. "
] |
https://api.github.com/repos/huggingface/datasets/issues/189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/189/comments | https://api.github.com/repos/huggingface/datasets/issues/189/events | https://github.com/huggingface/datasets/issues/189 | 624,048,881 | MDU6SXNzdWU2MjQwNDg4ODE= | 189 | [Question] BERT-style multiple choice formatting | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [] | closed | false | null | [] | null | 2 | "2020-05-25T05:11:05Z" | "2020-05-25T18:38:28Z" | "2020-05-25T18:38:28Z" | NONE | null | null | null | Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the number of answer choices in the MCQ instead of single items. I'm a bit confused on what the output of my feature conversion function should be when using `dataset.map()` to ensure similar behavior.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/189/timeline | null | completed | false | [
"Hi @sarahwie, can you details this a little more?\r\n\r\nI'm not sure I understand what you refer to and what you mean when you say \"Previously, this was done by passing a list of InputFeatures to the dataloader instead of a list of InputFeature\"",
"I think I've resolved it. For others' reference: to convert from using the [`MultipleChoiceDataset` class](https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/examples/multiple-choice/utils_multiple_choice.py#L82)/[`run_multiple_choice.py`](https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/examples/multiple-choice/run_multiple_choice.py) script in Huggingface Transformers, I've done the following for hellaswag:\r\n\r\n1. converted the `convert_examples_to_features()` function to only take one input and return a dictionary rather than a list:\r\n```\r\ndef convert_examples_to_features(example, tokenizer, max_length):\r\n\r\n choices_inputs = defaultdict(list)\r\n for ending_idx, ending in enumerate(example['endings']['ending']):\r\n text_a = example['ctx']\r\n text_b = ending\r\n\r\n inputs = tokenizer.encode_plus(\r\n text_a,\r\n text_b,\r\n add_special_tokens=True,\r\n max_length=max_length,\r\n pad_to_max_length=True,\r\n return_overflowing_tokens=True,\r\n )\r\n if \"num_truncated_tokens\" in inputs and inputs[\"num_truncated_tokens\"] > 0:\r\n logger.info(\r\n \"Attention! you are cropping tokens (swag task is ok). \"\r\n \"If you are training ARC and RACE and you are poping question + options,\"\r\n \"you need to try to use a bigger max seq length!\"\r\n )\r\n\r\n for key in inputs:\r\n choices_inputs[key].append(inputs[key])\r\n \r\n choices_inputs['label'] = int(example['label'])\r\n\r\n return choices_inputs\r\n```\r\n2. apply this directly (instance-wise) to dataset, convert dataset to torch tensors. Dataset is then ready to be passed to `Trainer` instance.\r\n\r\n```\r\ndataset['train'] = dataset['train'].map(lambda x: convert_examples_to_features(x, tokenizer, max_length), batched=False)\r\ncolumns = ['input_ids', 'token_type_ids', 'attention_mask', 'label']\r\ndataset['train'].set_format(type='torch', columns=columns)\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/188/comments | https://api.github.com/repos/huggingface/datasets/issues/188/events | https://github.com/huggingface/datasets/issues/188 | 623,890,430 | MDU6SXNzdWU2MjM4OTA0MzA= | 188 | When will the remaining math_dataset modules be added as dataset objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/31251196?v=4",
"events_url": "https://api.github.com/users/tylerroost/events{/privacy}",
"followers_url": "https://api.github.com/users/tylerroost/followers",
"following_url": "https://api.github.com/users/tylerroost/following{/other_user}",
"gists_url": "https://api.github.com/users/tylerroost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tylerroost",
"id": 31251196,
"login": "tylerroost",
"node_id": "MDQ6VXNlcjMxMjUxMTk2",
"organizations_url": "https://api.github.com/users/tylerroost/orgs",
"received_events_url": "https://api.github.com/users/tylerroost/received_events",
"repos_url": "https://api.github.com/users/tylerroost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tylerroost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tylerroost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tylerroost"
} | [] | closed | false | null | [] | null | 3 | "2020-05-24T15:46:52Z" | "2020-05-24T18:53:48Z" | "2020-05-24T18:53:48Z" | NONE | null | null | null | Currently only the algebra_linear_1d module is supported. Is there a timeline for supporting the other modules? If no timeline is established, how can I help? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/188/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/188/timeline | null | completed | false | [
"On a similar note it would be nice to differentiate between train-easy, train-medium, and train-hard",
"Hi @tylerroost, we don't have a timeline for this at the moment.\r\nIf you want to give it a look we would be happy to review a PR on it.\r\nAlso, the library is one week old so everything is quite barebones, in particular the doc.\r\nYou should expect some bumps on the road.\r\n\r\nTo get you started, you can check the datasets scripts in the `./datasets` folder on the repo and find the one on math_datasets that will need to be modified. Then you should check the original repository on the math_dataset to see where the other files to download are located and what is the expected format for the various parts of the dataset.\r\n\r\nTo get a general overview on how datasets scripts are written and used, you can read the nice tutorial on how to add a new dataset for TensorFlow Dataset [here](https://www.tensorflow.org/datasets/add_dataset), our API is not exactly identical but it can give you a high-level overview.",
"Thanks I'll give it a look"
] |
https://api.github.com/repos/huggingface/datasets/issues/187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/187/comments | https://api.github.com/repos/huggingface/datasets/issues/187/events | https://github.com/huggingface/datasets/issues/187 | 623,627,800 | MDU6SXNzdWU2MjM2Mjc4MDA= | 187 | [Question] How to load wikipedia ? Beam runner ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | 2 | "2020-05-23T10:18:52Z" | "2020-05-25T00:12:02Z" | "2020-05-25T00:12:02Z" | CONTRIBUTOR | null | null | null | When running `nlp.load_dataset('wikipedia')`, I got
* `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.`
* `AttributeError: 'NoneType' object has no attribute 'size'`
Could somebody tell me what I should do?
# Env
On Colab,
```
git clone https://github.com/huggingface/nlp
cd nlp
pip install -q .
```
```
%pip install -q apache_beam mwparserfromhell
-> ERROR: pydrive 1.3.1 has requirement oauth2client>=4.0.0, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: google-api-python-client 1.7.12 has requirement httplib2<1dev,>=0.17.0, but you'll have httplib2 0.12.0 which is incompatible.
ERROR: chainer 6.5.0 has requirement typing-extensions<=3.6.6, but you'll have typing-extensions 3.7.4.2 which is incompatible.
```
```
pip install -q apache-beam[interactive]
ERROR: google-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 5.10.0 which is incompatible.
```
# The whole message
```
WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
44 frames
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
/usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result)
1081 writer.write(e)
-> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)]
1083
/usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self)
422 def close(self):
--> 423 self.sink.close(self.temp_handle)
424 return self.temp_shard_path
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer)
537 if len(self._buffer[0]) > 0:
--> 538 self._flush_buffer()
539 if self._record_batches_byte_size > 0:
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self)
569 for b in x.buffers():
--> 570 size = size + b.size
571 self._record_batches_byte_size = self._record_batches_byte_size + size
AttributeError: 'NoneType' object has no attribute 'size'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-9-340aabccefff> in <module>()
----> 1 dset = nlp.load_dataset('wikipedia')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
370 verify_infos = not save_infos and not ignore_verifications
371 self._download_and_prepare(
--> 372 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
373 )
374 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
770 with beam.Pipeline(runner=beam_runner, options=beam_options,) as pipeline:
771 super(BeamBasedBuilder, self)._download_and_prepare(
--> 772 dl_manager, pipeline=pipeline, verify_infos=False
773 ) # TODO{beam} verify infos
774
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb)
501 def __exit__(self, exc_type, exc_val, exc_tb):
502 if not exc_type:
--> 503 self.run().wait_until_finish()
504
505 def visit(self, visitor):
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
481 return Pipeline.from_runner_api(
482 self.to_runner_api(use_fake_coders=True), self.runner,
--> 483 self._options).run(False)
484
485 if self._options.view_as(TypeOptions).runtime_type_check:
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
494 finally:
495 shutil.rmtree(tmpdir)
--> 496 return self.runner.run_pipeline(self, self._options)
497
498 def __enter__(self):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/direct/direct_runner.py in run_pipeline(self, pipeline, options)
128 runner = BundleBasedDirectRunner()
129
--> 130 return runner.run_pipeline(pipeline, options)
131
132
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_pipeline(self, pipeline, options)
553
554 self._latest_run_result = self.run_via_runner_api(
--> 555 pipeline.to_runner_api(default_environment=self._default_environment))
556 return self._latest_run_result
557
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_via_runner_api(self, pipeline_proto)
563 # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to
564 # the teststream (if any), and all the stages).
--> 565 return self.run_stages(stage_context, stages)
566
567 @contextlib.contextmanager
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_stages(self, stage_context, stages)
704 stage,
705 pcoll_buffers,
--> 706 stage_context.safe_coders)
707 metrics_by_stage[stage.name] = stage_results.process_bundle.metrics
708 monitoring_infos_by_stage[stage.name] = (
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in _run_stage(self, worker_handler_factory, pipeline_components, stage, pcoll_buffers, safe_coders)
1071 cache_token_generator=cache_token_generator)
1072
-> 1073 result, splits = bundle_manager.process_bundle(data_input, data_output)
1074
1075 def input_for(transform_id, input_id):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
2332
2333 with UnboundedThreadPoolExecutor() as executor:
-> 2334 for result, split_result in executor.map(execute, part_inputs):
2335
2336 split_result_list += split_result
/usr/lib/python3.6/concurrent/futures/_base.py in result_iterator()
584 # Careful not to keep a reference to the popped future
585 if timeout is None:
--> 586 yield fs.pop().result()
587 else:
588 yield fs.pop().result(end_time - time.monotonic())
/usr/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
/usr/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
/usr/local/lib/python3.6/dist-packages/apache_beam/utils/thread_pool_executor.py in run(self)
42 # If the future wasn't cancelled, then attempt to execute it.
43 try:
---> 44 self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))
45 except BaseException as exc:
46 # Even though Python 2 futures library has #set_exection(),
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in execute(part_map)
2329 self._registered,
2330 cache_token_generator=self._cache_token_generator)
-> 2331 return bundle_manager.process_bundle(part_map, expected_outputs)
2332
2333 with UnboundedThreadPoolExecutor() as executor:
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
2243 process_bundle_descriptor_id=self._bundle_descriptor.id,
2244 cache_tokens=[next(self._cache_token_generator)]))
-> 2245 result_future = self._worker_handler.control_conn.push(process_bundle_req)
2246
2247 split_results = [] # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse]
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in push(self, request)
1557 self._uid_counter += 1
1558 request.instruction_id = 'control_%s' % self._uid_counter
-> 1559 response = self.worker.do_instruction(request)
1560 return ControlFuture(request.instruction_id, response)
1561
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in do_instruction(self, request)
413 # E.g. if register is set, this will call self.register(request.register))
414 return getattr(self, request_type)(
--> 415 getattr(request, request_type), request.instruction_id)
416 else:
417 raise NotImplementedError
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in process_bundle(self, request, instruction_id)
448 with self.maybe_profile(instruction_id):
449 delayed_applications, requests_finalization = (
--> 450 bundle_processor.process_bundle(instruction_id))
451 monitoring_infos = bundle_processor.monitoring_infos()
452 monitoring_infos.extend(self.state_cache_metrics_fn())
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_bundle(self, instruction_id)
837 for data in data_channel.input_elements(instruction_id,
838 expected_transforms):
--> 839 input_op_by_transform_id[data.transform_id].process_encoded(data.data)
840
841 # Finish all operations.
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_encoded(self, encoded_windowed_values)
214 decoded_value = self.windowed_coder_impl.decode_from_stream(
215 input_stream, True)
--> 216 self.output(decoded_value)
217
218 def try_split(self, fraction_of_remainder, total_buffer_size):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented()
/usr/local/lib/python3.6/dist-packages/future/utils/__init__.py in raise_with_traceback(exc, traceback)
417 if traceback == Ellipsis:
418 _, _, traceback = sys.exc_info()
--> 419 raise exc.with_traceback(traceback)
420
421 else:
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
/usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result)
1080 for e in bundle[1]: # values
1081 writer.write(e)
-> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)]
1083
1084
/usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self)
421
422 def close(self):
--> 423 self.sink.close(self.temp_handle)
424 return self.temp_shard_path
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer)
536 def close(self, writer):
537 if len(self._buffer[0]) > 0:
--> 538 self._flush_buffer()
539 if self._record_batches_byte_size > 0:
540 self._write_batches(writer)
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self)
568 for x in arrays:
569 for b in x.buffers():
--> 570 size = size + b.size
571 self._record_batches_byte_size = self._record_batches_byte_size + size
AttributeError: 'NoneType' object has no attribute 'size' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/187/timeline | null | completed | false | [
"I have seen that somebody is hard working on easierly loadable wikipedia. #129 \r\nMaybe I should wait a few days for that version ?",
"Yes we (well @lhoestq) are very actively working on this."
] |
https://api.github.com/repos/huggingface/datasets/issues/186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/186/comments | https://api.github.com/repos/huggingface/datasets/issues/186/events | https://github.com/huggingface/datasets/issues/186 | 623,595,180 | MDU6SXNzdWU2MjM1OTUxODA= | 186 | Weird-ish: Not creating unique caches for different phases | {
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zphang",
"id": 1668462,
"login": "zphang",
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"repos_url": "https://api.github.com/users/zphang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zphang"
} | [] | closed | false | null | [] | null | 2 | "2020-05-23T06:40:58Z" | "2020-05-23T20:22:18Z" | "2020-05-23T20:22:17Z" | NONE | null | null | null | Sample code:
```python
import nlp
dataset = nlp.load_dataset('boolq')
def func1(x):
return x
def func2(x):
return None
train_output = dataset["train"].map(func1)
valid_output = dataset["validation"].map(func1)
print()
print(len(train_output), len(valid_output))
# Output: 9427 9427
```
The map method in both cases seems to be pointing to the same cache, so the latter call based on the validation data will return the processed train data cache.
What's weird is that the following doesn't seem to be an issue:
```python
train_output = dataset["train"].map(func2)
valid_output = dataset["validation"].map(func2)
print()
print(len(train_output), len(valid_output))
# 9427 3270
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/186/timeline | null | completed | false | [
"Looks like a duplicate of #120.\r\nThis is already fixed on master. We'll do a new release on pypi soon",
"Good catch, it looks fixed.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/185/comments | https://api.github.com/repos/huggingface/datasets/issues/185/events | https://github.com/huggingface/datasets/pull/185 | 623,172,484 | MDExOlB1bGxSZXF1ZXN0NDIxODkxNjY2 | 185 | [Commands] In-detail instructions to create dummy data folder | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 1 | "2020-05-22T12:26:25Z" | "2020-05-22T14:06:35Z" | "2020-05-22T14:06:34Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/185.diff",
"html_url": "https://github.com/huggingface/datasets/pull/185",
"merged_at": "2020-05-22T14:06:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/185.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/185"
} | ### Dummy data command
This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives detailed instructions on how to add the dummy data files.
It would be great if you could try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_script>/dummy_data datasets/<dataset_name>/dummy_data_copy` and then running `python nlp-cli dummy_data ./datasets/<dataset_name>` to see if you like the instructions.
### CONTRIBUTING.md
Also the CONTRIBUTING.md is made cleaner including a new section on "How to add a dataset".
### Current PRs
It would be nice if we can try out if this command helps current PRs, *e.g.* #169 to add a dataset. I comment on those PRs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/185/timeline | null | null | true | [
"awesome !"
] |
https://api.github.com/repos/huggingface/datasets/issues/184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/184/comments | https://api.github.com/repos/huggingface/datasets/issues/184/events | https://github.com/huggingface/datasets/pull/184 | 623,120,929 | MDExOlB1bGxSZXF1ZXN0NDIxODQ5MTQ3 | 184 | Use IndexError instead of ValueError when index out of range | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | 0 | "2020-05-22T10:43:42Z" | "2020-05-28T08:31:18Z" | "2020-05-28T08:31:18Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/184.diff",
"html_url": "https://github.com/huggingface/datasets/pull/184",
"merged_at": "2020-05-28T08:31:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/184.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/184"
} | **`default __iter__ needs IndexError`**.
When I wanted to create a wrapper of an arrow dataset to adapt it to fastai,
I didn't know how to initialize it, so I didn't use inheritance but used object composition instead.
I wrote something like this:
```
class HF_dataset():
def __init__(self, arrow_dataset):
self.dset = arrow_dataset
def __getitem__(self, i):
return self.my_get_item(self.dset)
```
But `for sample in my_dataset:` gave me `ValueError(f"Index ({key}) outside of table length ({self._data.num_rows}).")`. This is because the default `__iter__` only stops when it catches an `IndexError`.
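For reference, a minimal standalone illustration of Python's fallback iteration protocol (plain Python, no `nlp` involved): a `for` loop over an object that only defines `__getitem__` stops cleanly on `IndexError`, while any other exception escapes the loop:
```python
# Plain-Python illustration: iteration over a __getitem__-only object ends on
# IndexError, but a ValueError propagates out of the for-loop.
class StopsCleanly:
    def __getitem__(self, i):
        if i >= 3:
            raise IndexError(i)  # the for-loop treats this as "end of data"
        return i

class RaisesValueError:
    def __getitem__(self, i):
        if i >= 3:
            raise ValueError(i)  # the for-loop does NOT treat this as "end of data"
        return i

print(list(StopsCleanly()))   # [0, 1, 2]
try:
    list(RaisesValueError())
except ValueError as e:
    print("ValueError escaped:", e)
```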
You can also see my [work](https://github.com/richardyy1188/Pretrain-MLM-and-finetune-on-GLUE-with-fastai/blob/master/GLUE_with_fastai.ipynb) that uses fastai2 to show/load batches from huggingface/nlp GLUE datasets.
So I hope we can use `IndexError` instead, so that other people who want to wrap it for any purpose won't be caught by this caveat.
BTW, I super appreciate your work, both transformers and nlp save my life. 💖💖💖💖💖💖💖
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/184/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/184/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/183/comments | https://api.github.com/repos/huggingface/datasets/issues/183/events | https://github.com/huggingface/datasets/issues/183 | 623,054,270 | MDU6SXNzdWU2MjMwNTQyNzA= | 183 | [Bug] labels of glue/ax are all -1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 2 | "2020-05-22T08:43:36Z" | "2020-05-22T22:14:05Z" | "2020-05-22T22:14:05Z" | CONTRIBUTOR | null | null | null | ```
ax = nlp.load_dataset('glue', 'ax')
for i in range(30): print(ax['test'][i]['label'], end=', ')
```
```
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/183/timeline | null | completed | false | [
"This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.",
"Ah, yeah. Why it didn’t occur to me. 😂\nThank you for your comment."
] |
https://api.github.com/repos/huggingface/datasets/issues/182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/182/comments | https://api.github.com/repos/huggingface/datasets/issues/182/events | https://github.com/huggingface/datasets/pull/182 | 622,646,770 | MDExOlB1bGxSZXF1ZXN0NDIxNDcxMjg4 | 182 | Update newsroom.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/3289873?v=4",
"events_url": "https://api.github.com/users/yoavartzi/events{/privacy}",
"followers_url": "https://api.github.com/users/yoavartzi/followers",
"following_url": "https://api.github.com/users/yoavartzi/following{/other_user}",
"gists_url": "https://api.github.com/users/yoavartzi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yoavartzi",
"id": 3289873,
"login": "yoavartzi",
"node_id": "MDQ6VXNlcjMyODk4NzM=",
"organizations_url": "https://api.github.com/users/yoavartzi/orgs",
"received_events_url": "https://api.github.com/users/yoavartzi/received_events",
"repos_url": "https://api.github.com/users/yoavartzi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yoavartzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoavartzi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yoavartzi"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 0 | "2020-05-21T17:07:43Z" | "2020-05-22T16:38:23Z" | "2020-05-22T16:38:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/182.diff",
"html_url": "https://github.com/huggingface/datasets/pull/182",
"merged_at": "2020-05-22T16:38:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/182.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/182"
} | Updated the URL for Newsroom download so it's more robust to future changes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/182/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/182/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/181/comments | https://api.github.com/repos/huggingface/datasets/issues/181/events | https://github.com/huggingface/datasets/issues/181 | 622,634,420 | MDU6SXNzdWU2MjI2MzQ0MjA= | 181 | Cannot upload my own dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3155646?v=4",
"events_url": "https://api.github.com/users/korakot/events{/privacy}",
"followers_url": "https://api.github.com/users/korakot/followers",
"following_url": "https://api.github.com/users/korakot/following{/other_user}",
"gists_url": "https://api.github.com/users/korakot/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/korakot",
"id": 3155646,
"login": "korakot",
"node_id": "MDQ6VXNlcjMxNTU2NDY=",
"organizations_url": "https://api.github.com/users/korakot/orgs",
"received_events_url": "https://api.github.com/users/korakot/received_events",
"repos_url": "https://api.github.com/users/korakot/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/korakot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/korakot/subscriptions",
"type": "User",
"url": "https://api.github.com/users/korakot"
} | [] | closed | false | null | [] | null | 6 | "2020-05-21T16:45:52Z" | "2020-06-18T22:14:42Z" | "2020-06-18T22:14:42Z" | NONE | null | null | null | I look into `nlp-cli` and `user.py` to learn how to upload my own data.
It is supposed to work like this
- Register to get username, password at huggingface.co
- `nlp-cli login` and type username, password
- I have a single file to upload at `./ttc/ttc_freq_extra.csv`
- `nlp-cli upload ttc/ttc_freq_extra.csv`
But I got this error.
```
2020-05-21 16:33:52.722464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
About to upload file /content/ttc/ttc_freq_extra.csv to S3 under filename ttc/ttc_freq_extra.csv and namespace korakot
Proceed? [Y/n] y
Uploading... This might take a while if files are large
Traceback (most recent call last):
File "/usr/local/bin/nlp-cli", line 33, in <module>
service.run()
File "/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py", line 234, in run
token=token, filename=filename, filepath=filepath, organization=self.args.organization
File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 141, in presign_and_upload
urls = self.presign(token, filename=filename, organization=organization)
File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 132, in presign
return PresignedUrl(**d)
TypeError: __init__() got an unexpected keyword argument 'cdn'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/181/timeline | null | completed | false | [
"It's my misunderstanding. I cannot just upload a csv. I need to write a dataset loading script too.",
"I now try with the sample `datasets/csv` folder. \r\n\r\n nlp-cli upload csv\r\n\r\nThe error is still the same\r\n\r\n```\r\n2020-05-21 17:20:56.394659: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nAbout to upload file /content/csv/csv.py to S3 under filename csv/csv.py and namespace korakot\r\nAbout to upload file /content/csv/dummy/0.0.0/dummy_data.zip to S3 under filename csv/dummy/0.0.0/dummy_data.zip and namespace korakot\r\nProceed? [Y/n] y\r\nUploading... This might take a while if files are large\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/nlp-cli\", line 33, in <module>\r\n service.run()\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py\", line 234, in run\r\n token=token, filename=filename, filepath=filepath, organization=self.args.organization\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py\", line 141, in presign_and_upload\r\n urls = self.presign(token, filename=filename, organization=organization)\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py\", line 132, in presign\r\n return PresignedUrl(**d)\r\nTypeError: __init__() got an unexpected keyword argument 'cdn'\r\n```\r\n",
"We haven't tested the dataset upload feature yet cc @julien-c \r\nThis is on our short/mid-term roadmap though",
"Even if I fix the `TypeError: __init__() got an unexpected keyword argument 'cdn'` error, it looks like it still uploads to `https://s3.amazonaws.com/models.huggingface.co/bert/<namespace>/<dataset_name>` instead of `https://s3.amazonaws.com/datasets.huggingface.co/nlp/<namespace>/<dataset_name>`",
"@lhoestq The endpoints in https://github.com/huggingface/nlp/blob/master/src/nlp/hf_api.py should be (depending on the type of file):\r\n```\r\nPOST /api/datasets/presign\r\nGET /api/datasets/listObjs\r\nDELETE /api/datasets/deleteObj\r\nPOST /api/metrics/presign \r\nGET /api/metrics/listObjs\r\nDELETE /api/metrics/deleteObj\r\n```\r\n\r\nIn addition to this, @thomwolf cleaned up the objects with dataclasses but you should revert this and re-align to the hf_api that's in this branch of transformers: https://github.com/huggingface/transformers/pull/4632 (so that potential new JSON attributes in the API output don't break existing versions of any library)",
"New commands are\r\n```\r\nnlp-cli upload_dataset <path/to/dataset>\r\nnlp-cli upload_metric <path/to/metric>\r\nnlp-cli s3_datasets {rm, ls}\r\nnlp-cli s3_metrics {rm, ls}\r\n```\r\nClosing this issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/180/comments | https://api.github.com/repos/huggingface/datasets/issues/180/events | https://github.com/huggingface/datasets/pull/180 | 622,556,861 | MDExOlB1bGxSZXF1ZXN0NDIxMzk5Nzg2 | 180 | Add hall of fame | {
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clmnt",
"id": 821155,
"login": "clmnt",
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"repos_url": "https://api.github.com/users/clmnt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clmnt"
} | [] | closed | false | null | [] | null | 0 | "2020-05-21T14:53:48Z" | "2020-05-22T16:35:16Z" | "2020-05-22T16:35:14Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/180",
"merged_at": "2020-05-22T16:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/180"
} | powered by https://github.com/sourcerer-io/hall-of-fame | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/180/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/179/comments | https://api.github.com/repos/huggingface/datasets/issues/179/events | https://github.com/huggingface/datasets/issues/179 | 622,525,410 | MDU6SXNzdWU2MjI1MjU0MTA= | 179 | [Feature request] separate split name and split instructions | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 2 | "2020-05-21T14:10:51Z" | "2020-05-22T13:31:08Z" | "2020-05-22T13:31:07Z" | MEMBER | null | null | null | Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction.
This makes it impossible to have several training sets, which can occur when:
- A dataset corresponds to a collection of sub-datasets
- A dataset was built in stages, adding new examples at each stage
Would it be possible to have two separate fields in the Split class, a name /instruction and a unique ID that is used as the key in the builder's split_dict ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/179/timeline | null | completed | false | [
"If your dataset is a collection of sub-datasets, you should probably consider having one config per sub-dataset. For example for Glue, we have sst2, mnli etc.\r\nIf you want to have multiple train sets (for example one per stage). The easiest solution would be to name them `nlp.Split(\"train_stage1\")`, `nlp.Split(\"train_stage2\")`, etc. or something like that.",
"Thanks for the tip! I ended up setting up three different versions of the dataset with their own configs.\r\n\r\nfor the named splits, I was trying with `nlp.Split(\"train-stage1\")`, which fails. Changing to `nlp.Split(\"train_stage1\")` works :) I looked for examples of what works in the code comments, it may be worth adding some examples of valid/invalid names in there?"
] |
https://api.github.com/repos/huggingface/datasets/issues/178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/178/comments | https://api.github.com/repos/huggingface/datasets/issues/178/events | https://github.com/huggingface/datasets/pull/178 | 621,979,849 | MDExOlB1bGxSZXF1ZXN0NDIwOTMyMDI5 | 178 | [Manual data] improve error message for manual data in general | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-05-20T18:10:45Z" | "2020-05-20T18:18:52Z" | "2020-05-20T18:18:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/178.diff",
"html_url": "https://github.com/huggingface/datasets/pull/178",
"merged_at": "2020-05-20T18:18:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/178.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/178"
} | `nlp.load("xsum")` now leads to the following error message:
![Screenshot from 2020-05-20 20-05-28](https://user-images.githubusercontent.com/23423619/82481825-3587ea00-9ad6-11ea-9ca2-5794252c6ac7.png)
I guess the manual download instructions for `xsum` can also be improved. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/178/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/177/comments | https://api.github.com/repos/huggingface/datasets/issues/177/events | https://github.com/huggingface/datasets/pull/177 | 621,975,368 | MDExOlB1bGxSZXF1ZXN0NDIwOTI4MzE0 | 177 | Xsum manual download instruction | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-05-20T18:02:41Z" | "2020-05-20T18:16:50Z" | "2020-05-20T18:16:49Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/177.diff",
"html_url": "https://github.com/huggingface/datasets/pull/177",
"merged_at": "2020-05-20T18:16:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/177.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/177"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/177/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/177/timeline | null | null | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/176/comments | https://api.github.com/repos/huggingface/datasets/issues/176/events | https://github.com/huggingface/datasets/pull/176 | 621,934,638 | MDExOlB1bGxSZXF1ZXN0NDIwODkzNDky | 176 | [Tests] Refactor MockDownloadManager | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-05-20T17:07:36Z" | "2020-05-20T18:17:19Z" | "2020-05-20T18:17:18Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/176.diff",
"html_url": "https://github.com/huggingface/datasets/pull/176",
"merged_at": "2020-05-20T18:17:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/176.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/176"
} | Clean mock download manager class.
The print function was not of much help, I think.
We should think about adding a command that creates the dummy folder structure for the user. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/176/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/175/comments | https://api.github.com/repos/huggingface/datasets/issues/175/events | https://github.com/huggingface/datasets/issues/175 | 621,929,428 | MDU6SXNzdWU2MjE5Mjk0Mjg= | 175 | [Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | null | [] | null | 0 | "2020-05-20T17:00:32Z" | "2020-05-20T18:18:50Z" | "2020-05-20T18:18:50Z" | CONTRIBUTOR | null | null | null | v 0.1.0 from pip
```python
import nlp
xsum = nlp.load_dataset('xsum')
```
The issue is that `dl_manager.manual_dir` is `None`.
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-42-8a32f066f3bd> in <module>
----> 1 xsum = nlp.load_dataset('xsum')
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
397 split_dict = SplitDict(dataset_name=self.name)
398 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 399 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
400 # Checksums verification
401 if verify_infos:
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/datasets/xsum/5c5fca23aaaa469b7a1c6f095cf12f90d7ab99bcc0d86f689a74fd62634a1472/xsum.py in _split_generators(self, dl_manager)
102 with open(dl_path, "r") as json_file:
103 split_ids = json.load(json_file)
--> 104 downloaded_path = os.path.join(dl_manager.manual_dir, "xsum-extracts-from-downloads")
105 return [
106 nlp.SplitGenerator(
~/miniconda3/envs/nb/lib/python3.7/posixpath.py in join(a, *p)
78 will be discarded. An empty last part will result in a path that
79 ends with a separator."""
---> 80 a = os.fspath(a)
81 sep = _get_sep(a)
82 path = a
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
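For anyone hitting this, the workaround is presumably to download the XSum data manually and point `load_dataset` at it; the path below is only a placeholder:
```python
# Sketch: '/path/to/xsum-manual-data' is a placeholder for the directory that
# contains the manually downloaded/extracted XSum files.
import nlp
xsum = nlp.load_dataset('xsum', data_dir='/path/to/xsum-manual-data')
```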
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/175/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/175/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/174/comments | https://api.github.com/repos/huggingface/datasets/issues/174/events | https://github.com/huggingface/datasets/issues/174 | 621,928,403 | MDU6SXNzdWU2MjE5Mjg0MDM= | 174 | nlp.load_dataset('xsum') -> TypeError | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | null | [] | null | 0 | "2020-05-20T16:59:09Z" | "2020-05-20T17:43:46Z" | "2020-05-20T17:43:46Z" | CONTRIBUTOR | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/174/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/174/timeline | null | completed | false | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/173/comments | https://api.github.com/repos/huggingface/datasets/issues/173/events | https://github.com/huggingface/datasets/pull/173 | 621,764,932 | MDExOlB1bGxSZXF1ZXN0NDIwNzUyNzQy | 173 | Rm extracted test dirs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2020-05-20T13:30:48Z" | "2020-05-22T16:34:36Z" | "2020-05-22T16:34:35Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/173.diff",
"html_url": "https://github.com/huggingface/datasets/pull/173",
"merged_at": "2020-05-22T16:34:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/173.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/173"
} | All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories.
Furthermore, instead of extracting next to the dummy_data.zip file, we now extract into the temp `cached_dir` used for tests, so that all the extracted directories get removed after testing.
Finally, there was a bug in the `mock_download_manager` that would let it create directories with invalid names, as in #172. I fixed that by encoding the url arguments. I had to rename the dummy data for `scientific_papers` and `cnn_dailymail` (the aws tests don't pass for those 2 in this PR, but they will once aws is synced, as the local ones do).
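For reference, a minimal sketch of the kind of encoding meant here (the exact helper in the PR may differ): percent-encoding the url arguments removes characters like `?`, `&` and `=` that are invalid in Windows directory names:
```python
from urllib.parse import quote

# The problematic Google Drive url fragment from #172, made filesystem-safe.
url_part = "uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs"
safe_dir_name = quote(url_part, safe="")
print(safe_dir_name)  # uc%3Fexport%3Ddownload%26id%3D0BwmD_VLjROrfM1BxdkxVaTY2bWs
```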
Let me know if it sounds good to you @patrickvonplaten. I'm still not entirely familiar with the mock downloader. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/173/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/173/timeline | null | null | true | [
"Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).",
"Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!"
] |
https://api.github.com/repos/huggingface/datasets/issues/172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/172/comments | https://api.github.com/repos/huggingface/datasets/issues/172/events | https://github.com/huggingface/datasets/issues/172 | 621,377,386 | MDU6SXNzdWU2MjEzNzczODY= | 172 | Clone not working on Windows environment | {
"avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4",
"events_url": "https://api.github.com/users/codehunk628/events{/privacy}",
"followers_url": "https://api.github.com/users/codehunk628/followers",
"following_url": "https://api.github.com/users/codehunk628/following{/other_user}",
"gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codehunk628",
"id": 51091425,
"login": "codehunk628",
"node_id": "MDQ6VXNlcjUxMDkxNDI1",
"organizations_url": "https://api.github.com/users/codehunk628/orgs",
"received_events_url": "https://api.github.com/users/codehunk628/received_events",
"repos_url": "https://api.github.com/users/codehunk628/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codehunk628"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 2 | "2020-05-20T00:45:14Z" | "2020-05-23T12:49:13Z" | "2020-05-23T11:27:52Z" | CONTRIBUTOR | null | null | null | Cloning in a windows environment is not working because of use of special character '?' in folder name ..
Please consider changing the folder name.
Reference to folder:
nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/stories/
error log:
fatal: cannot create directory at 'datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs': Invalid argument
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/172/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/172/timeline | null | completed | false | [
"Should be fixed on master now :)",
"Thanks @lhoestq 👍 Now I can uninstall WSL and get back to work with windows.🙂"
] |
https://api.github.com/repos/huggingface/datasets/issues/171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/171/comments | https://api.github.com/repos/huggingface/datasets/issues/171/events | https://github.com/huggingface/datasets/pull/171 | 621,199,128 | MDExOlB1bGxSZXF1ZXN0NDIwMjk0ODM0 | 171 | fix squad metric format | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 5 | "2020-05-19T18:37:36Z" | "2020-05-22T13:36:50Z" | "2020-05-22T13:36:48Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/171.diff",
"html_url": "https://github.com/huggingface/datasets/pull/171",
"merged_at": "2020-05-22T13:36:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/171.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/171"
} | The format of the squad metric was wrong.
This should fix #143
I tested with
```python3
predictions = [
{'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
]
references = [
{'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}
]
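# Sketch of running the metric end to end (usage inferred from this PR's discussion;
# assumes `import nlp` and that the metric script is available locally):
squad_metric = nlp.load_metric("./metrics/squad")
results = squad_metric.compute(predictions, references)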
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/171/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/171/timeline | null | null | true | [
"One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n\r\n(maybe it's not really possible in general though)",
"This is kinda related to one thing I had in mind which is that we may want to be able to dump our model predictions in a `Dataset` as well so that we don't keep them in memory (and we can export them in a nice format later as well when we will have a serialization formats).\r\n\r\nMaybe this is overkill though, I haven't fully wraped my head around this.",
"I'm also perfectly fine with merging this PR in the current state and working on a larger scope later.",
"This is the format needed to run the official script directly. The format of the squad dataset is different from the input of the metric. \r\n\r\n> One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n> \r\n> (maybe it's not really possible in general though)\r\n\r\nOk I see. I'll try to use the same format",
"Ok with this update I changed the format to fit the squad dataset format.\r\nNow you can do:\r\n```python\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/170/comments | https://api.github.com/repos/huggingface/datasets/issues/170/events | https://github.com/huggingface/datasets/pull/170 | 621,119,747 | MDExOlB1bGxSZXF1ZXN0NDIwMjMwMDIx | 170 | Rename anli dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-05-19T16:26:57Z" | "2020-05-20T12:23:09Z" | "2020-05-20T12:23:08Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/170.diff",
"html_url": "https://github.com/huggingface/datasets/pull/170",
"merged_at": "2020-05-20T12:23:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/170.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/170"
} | What we have now as the `anli` dataset is actually the αNLI dataset from the ART challenge dataset. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)).
I renamed the current `anli` dataset to `art`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/170/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/169/comments | https://api.github.com/repos/huggingface/datasets/issues/169/events | https://github.com/huggingface/datasets/pull/169 | 621,099,682 | MDExOlB1bGxSZXF1ZXN0NDIwMjE1NDkw | 169 | Adding Qanta (Quizbowl) Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4",
"events_url": "https://api.github.com/users/EntilZha/events{/privacy}",
"followers_url": "https://api.github.com/users/EntilZha/followers",
"following_url": "https://api.github.com/users/EntilZha/following{/other_user}",
"gists_url": "https://api.github.com/users/EntilZha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/EntilZha",
"id": 1382460,
"login": "EntilZha",
"node_id": "MDQ6VXNlcjEzODI0NjA=",
"organizations_url": "https://api.github.com/users/EntilZha/orgs",
"received_events_url": "https://api.github.com/users/EntilZha/received_events",
"repos_url": "https://api.github.com/users/EntilZha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/EntilZha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EntilZha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/EntilZha"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 5 | "2020-05-19T16:03:01Z" | "2020-05-26T12:52:31Z" | "2020-05-26T12:52:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/169",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/169"
} | This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold)
This partially continues a discussion around fixing dummy data from https://github.com/huggingface/nlp/issues/161
I ran the following code to double check that it works and did some sanity checks on the output. The majority of the code itself is from our `allennlp` version of the dataset reader.
```python
import nlp
# Default is full question
data = nlp.load_dataset('./datasets/qanta')
# Four configs
# Primarily useful for training
data = nlp.load_dataset('./datasets/qanta', 'mode=sentences,char_skip=25')
# Primarily used in evaluation
data = nlp.load_dataset('./datasets/qanta', 'mode=first,char_skip=25')
data = nlp.load_dataset('./datasets/qanta', 'mode=full,char_skip=25')
# Primarily useful in evaluation and "live" play
data = nlp.load_dataset('./datasets/qanta', 'mode=runs,char_skip=25')
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/169/timeline | null | null | true | [
"Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is correct following the instructions here: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset ? \r\n\r\nIf the tests described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset pass we can merge the PR :-) ",
"I updated to the most recent master and followed the steps, but still having the similar error where it can't find the correct file since the path to the directory is given, rather than the individual files within them. This still something wrong about how I'm inputting the data or how the tests are reading it?",
"It's the dummy_data structure. You actually have to call the dummy data file name `dummy_data` (not .json anything). So there should not be a `dummy_data` folder but for each config only a `dummy_data` which contains your json dummy data. Can you maybe try once more - if it doesn't work I do it for you :-). ",
"Would that work if there are multiple files? In my case, I'm including something similar to squad 1.0/2.0 where we have the main dataset + an additional challenge set in different files. Would I have the zip decompress to two files in that case?",
"This dataset was actually a special case. It helped us improve the dummy data instructions :-), see #195 .Close this PR and merge #194."
] |
https://api.github.com/repos/huggingface/datasets/issues/168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/168/comments | https://api.github.com/repos/huggingface/datasets/issues/168/events | https://github.com/huggingface/datasets/issues/168 | 620,959,819 | MDU6SXNzdWU2MjA5NTk4MTk= | 168 | Loading 'wikitext' dataset fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/25987633?v=4",
"events_url": "https://api.github.com/users/itay1itzhak/events{/privacy}",
"followers_url": "https://api.github.com/users/itay1itzhak/followers",
"following_url": "https://api.github.com/users/itay1itzhak/following{/other_user}",
"gists_url": "https://api.github.com/users/itay1itzhak/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/itay1itzhak",
"id": 25987633,
"login": "itay1itzhak",
"node_id": "MDQ6VXNlcjI1OTg3NjMz",
"organizations_url": "https://api.github.com/users/itay1itzhak/orgs",
"received_events_url": "https://api.github.com/users/itay1itzhak/received_events",
"repos_url": "https://api.github.com/users/itay1itzhak/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/itay1itzhak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itay1itzhak/subscriptions",
"type": "User",
"url": "https://api.github.com/users/itay1itzhak"
} | [] | closed | false | null | [] | null | 6 | "2020-05-19T13:04:29Z" | "2020-05-26T21:46:52Z" | "2020-05-26T21:46:52Z" | NONE | null | null | null | Loading the 'wikitext' dataset fails with Attribute error:
Code to reproduce (From example notebook):
import nlp
wikitext_dataset = nlp.load_dataset('wikitext')
Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-17-d5d9df94b13c> in <module>()
11
12 # Load a dataset and print the first examples in the training set
---> 13 wikitext_dataset = nlp.load_dataset('wikitext')
14 print(wikitext_dataset['train'][0])
6 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
363 verify_infos = not save_infos and not ignore_verifications
364 self._download_and_prepare(
--> 365 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
366 )
367 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
416 try:
417 # Prepare split will record examples associated to the split
--> 418 self._prepare_split(split_generator, **prepare_split_kwargs)
419 except OSError:
420 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
594 example = self.info.features.encode_example(record)
595 writer.write(example)
--> 596 num_examples, num_bytes = writer.finalize()
597
598 assert num_examples == num_examples, f"Expected to write {split_info.num_examples} but wrote {num_examples}"
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in finalize(self, close_stream)
173 def finalize(self, close_stream=True):
174 if self.pa_writer is not None:
--> 175 self.write_on_file()
176 self.pa_writer.close()
177 if close_stream:
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self)
124 else:
125 # All good
--> 126 self._write_array_on_file(pa_array)
127 self.current_rows = []
128
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array)
93 def _write_array_on_file(self, pa_array):
94 """Write a PyArrow Array"""
---> 95 pa_batch = pa.RecordBatch.from_struct_array(pa_array)
96 self._num_bytes += pa_array.nbytes
97 self.pa_writer.write_batch(pa_batch)
AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/168/timeline | null | completed | false | [
"Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128",
"Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.",
"Closing as it is a duplicate",
"Hi,\r\nThe squad bug seems to be fixed, but the loading of the 'wikitext' still suffers from this problem (on Colab with pyarrow=0.17.1).",
"When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.",
"That was it, thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/167/comments | https://api.github.com/repos/huggingface/datasets/issues/167/events | https://github.com/huggingface/datasets/pull/167 | 620,908,786 | MDExOlB1bGxSZXF1ZXN0NDIwMDY0NDMw | 167 | [Tests] refactor tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 1 | "2020-05-19T11:43:32Z" | "2020-05-19T16:17:12Z" | "2020-05-19T16:17:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/167.diff",
"html_url": "https://github.com/huggingface/datasets/pull/167",
"merged_at": "2020-05-19T16:17:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/167.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/167"
} | This PR separates AWS and Local tests to remove these ugly statements in the script:
```python
if "/" not in dataset_name:
logging.info("Skip {} because it is a canonical dataset")
return
```
To run a `aws` test, one should now run the following command:
```python
pytest -s tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_wmt14
```
The same `local` test can be run with:
```python
pytest -s tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_wmt14
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/167/timeline | null | null | true | [
"Nice !"
] |
https://api.github.com/repos/huggingface/datasets/issues/166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/166/comments | https://api.github.com/repos/huggingface/datasets/issues/166/events | https://github.com/huggingface/datasets/issues/166 | 620,850,218 | MDU6SXNzdWU2MjA4NTAyMTg= | 166 | Add a method to shuffle a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | [] | null | 4 | "2020-05-19T10:08:46Z" | "2020-06-23T15:07:33Z" | "2020-06-23T15:07:32Z" | MEMBER | null | null | null | Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method.
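A minimal sketch of how the proposed API might be used, assuming the `shuffle(generator=None, seed=None)` signature above (illustrative only, not the actual implementation):
```python
import numpy as np
import nlp

dataset = nlp.load_dataset("squad", split="train")

# Hypothetical usage of the proposed method: a fixed seed gives a reproducible order.
shuffled = dataset.shuffle(seed=42)

# Or pass an explicit NumPy generator (assumed to be what `generator` would accept).
rng = np.random.default_rng(42)
shuffled = dataset.shuffle(generator=rng)
```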
Also, we could maybe have a clear indication of which methods modify a dataset in-place and which methods return/cache a modified dataset. I kinda like the torch convention of having an underscore suffix for all the methods which modify a dataset in-place. What do you think? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/166/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/166/timeline | null | completed | false | [
"+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)",
"+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster than do shuffle in dataset, especially when doing shuffle every epoch.\r\n\r\nAlso +1 for the naming convention.",
"As you might already know the issue of dataset shuffling came up in the nlp code [walkthrough](https://youtu.be/G3pOvrKkFuk?t=3204) by Yannic Kilcher\r\n",
"We added the `.shuffle` method :)\r\n\r\nClosing this one."
] |
https://api.github.com/repos/huggingface/datasets/issues/165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/165/comments | https://api.github.com/repos/huggingface/datasets/issues/165/events | https://github.com/huggingface/datasets/issues/165 | 620,758,221 | MDU6SXNzdWU2MjA3NTgyMjE= | 165 | ANLI | {
"avatar_url": "https://avatars.githubusercontent.com/u/6024930?v=4",
"events_url": "https://api.github.com/users/douwekiela/events{/privacy}",
"followers_url": "https://api.github.com/users/douwekiela/followers",
"following_url": "https://api.github.com/users/douwekiela/following{/other_user}",
"gists_url": "https://api.github.com/users/douwekiela/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/douwekiela",
"id": 6024930,
"login": "douwekiela",
"node_id": "MDQ6VXNlcjYwMjQ5MzA=",
"organizations_url": "https://api.github.com/users/douwekiela/orgs",
"received_events_url": "https://api.github.com/users/douwekiela/received_events",
"repos_url": "https://api.github.com/users/douwekiela/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/douwekiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/douwekiela/subscriptions",
"type": "User",
"url": "https://api.github.com/users/douwekiela"
} | [] | closed | false | null | [] | null | 0 | "2020-05-19T07:50:57Z" | "2020-05-20T12:23:07Z" | "2020-05-20T12:23:07Z" | NONE | null | null | null | Can I recommend the following:
For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART."
Indeed, the paper cited under what is currently called anli says in the abstract "We introduce a challenge dataset, ART".
The current naming will confuse people :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/165/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/164/comments | https://api.github.com/repos/huggingface/datasets/issues/164/events | https://github.com/huggingface/datasets/issues/164 | 620,540,250 | MDU6SXNzdWU2MjA1NDAyNTA= | 164 | Add Spanish POR and NER Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mrm8488",
"id": 3653789,
"login": "mrm8488",
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mrm8488"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 2 | "2020-05-18T22:18:21Z" | "2020-05-25T16:28:45Z" | "2020-05-25T16:28:45Z" | CONTRIBUTOR | null | null | null | Hi guys,
In order to cover multilingual support, a small step could be adding the standard datasets used for Spanish NER and POS tasks.
I can provide them in raw and preprocessed formats. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/164/timeline | null | completed | false | [
"Hello @mrm8488, are these datasets official datasets published in an NLP/CL/ML venue?",
"What about this one: https://github.com/ccasimiro88/TranslateAlignRetrieve?"
] |
https://api.github.com/repos/huggingface/datasets/issues/163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/163/comments | https://api.github.com/repos/huggingface/datasets/issues/163/events | https://github.com/huggingface/datasets/issues/163 | 620,534,307 | MDU6SXNzdWU2MjA1MzQzMDc= | 163 | [Feature request] Add cos-e v1.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 10 | "2020-05-18T22:05:26Z" | "2020-06-16T23:15:25Z" | "2020-06-16T18:52:06Z" | NONE | null | null | null | I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](https://arxiv.org/pdf/2004.14546.pdf). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/163/timeline | null | completed | false | [
"Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann",
"cos_e v1.0 is related to CQA v1.0 but only CQA v1.11 dataset is available on their website. Indeed their is lots of ids in cos_e v1, which are not in CQA v1.11 or the other way around.\r\n@sarahwie, @thomwolf, @nazneenrajani, @bmccann do you know where I can find CQA v1.0\r\n",
"@mariamabarham I'm also not sure where to find CQA 1.0. Perhaps it's not possible to include this version of the dataset. I'll close the issue if that's the case.",
"I do have a copy of the dataset. I can upload it to our repo.",
"Great @nazneenrajani. let me know once done.\r\nThanks",
"@mariamabarham @sarahwie I added them to the cos-e repo https://github.com/salesforce/cos-e/tree/master/data/v1.0",
"You can now do\r\n```python\r\nfrom nlp import load_dataset\r\ncos_e = load_dataset(\"cos_e\", \"v1.0\")\r\n```\r\nThanks @mariamabarham !",
"Thanks!",
"@mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended). ",
"> @mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended).\r\n\r\nIn the new version of `nlp`, if you try `cos_e = load_dataset(\"cos_e\")` it throws this error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['v1.0', 'v1.11']\r\nExample of usage:\r\n\t`load_dataset('cos_e', 'v1.0')`\r\n```\r\nFor datasets with at least two configurations, we now force the user to pick one (no default)"
] |
https://api.github.com/repos/huggingface/datasets/issues/162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/162/comments | https://api.github.com/repos/huggingface/datasets/issues/162/events | https://github.com/huggingface/datasets/pull/162 | 620,513,554 | MDExOlB1bGxSZXF1ZXN0NDE5NzQ4Mzky | 162 | fix prev files hash in map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2020-05-18T21:20:51Z" | "2020-05-18T21:36:21Z" | "2020-05-18T21:36:20Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/162.diff",
"html_url": "https://github.com/huggingface/datasets/pull/162",
"merged_at": "2020-05-18T21:36:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/162.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/162"
} | Fix the `.map` issue in #160.
This makes sure it takes the previous files when computing the hash. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/162/timeline | null | null | true | [
"Awesome! ",
"Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified",
"Perfect then :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/161/comments | https://api.github.com/repos/huggingface/datasets/issues/161/events | https://github.com/huggingface/datasets/issues/161 | 620,487,535 | MDU6SXNzdWU2MjA0ODc1MzU= | 161 | Discussion on version identifier & MockDataLoaderManager for test data | {
"avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4",
"events_url": "https://api.github.com/users/EntilZha/events{/privacy}",
"followers_url": "https://api.github.com/users/EntilZha/followers",
"following_url": "https://api.github.com/users/EntilZha/following{/other_user}",
"gists_url": "https://api.github.com/users/EntilZha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/EntilZha",
"id": 1382460,
"login": "EntilZha",
"node_id": "MDQ6VXNlcjEzODI0NjA=",
"organizations_url": "https://api.github.com/users/EntilZha/orgs",
"received_events_url": "https://api.github.com/users/EntilZha/received_events",
"repos_url": "https://api.github.com/users/EntilZha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/EntilZha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EntilZha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/EntilZha"
} | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 12 | "2020-05-18T20:31:30Z" | "2020-05-24T18:10:03Z" | null | CONTRIBUTOR | null | null | null | Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers the error. If I can get something to work, I can include it in my data PR once I'm done. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/161/timeline | null | null | false | [
"usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ",
"I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more sanity checks/tests (just got tests passing).\r\n\r\nI figured out how to get all tests passing by adding a download command and some finagling with the data zip https://github.com/EntilZha/nlp/blob/master/tests/utils.py#L127\r\n\r\n",
"I'm quite positive that you can just replace the `dl_manager.download()` statements here: https://github.com/EntilZha/nlp/blob/4d46443b65f1f756921db8275594e6af008a1de7/datasets/qanta/qanta.py#L194 with `dl_manager.download_and_extract()` even though you don't extract anything. I would prefer to avoid adding more functions to the MockDataLoadManager and keep it as simple as possible (It's already to complex now IMO). \r\n\r\nCould you check if you can replace the `download()` function? ",
"I might be doing something wrong, but swapping those two gives this error:\r\n```\r\n> with open(path) as f:\r\nE IsADirectoryError: [Errno 21] Is a directory: 'datasets/qanta/dummy/mode=first,char_skip=25/2018.4.18/dummy_data-zip-extracted/dummy_data'\r\n\r\nsrc/nlp/datasets/qanta/3d965403133687b819905ead4b69af7bcee365865279b2f797c79f809b4490c3/qanta.py:280: IsADirectoryError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n```\r\n\r\nSo it seems like the directory name is getting passed. Is this not functioning as expected, or is there some caching happening maybe? I deleted the dummy files and re-ran the import script with no changes. I'm digging a bit in with a debugger, but no clear reason yet",
"From what I can tell here: https://github.com/huggingface/nlp/blob/master/tests/utils.py#L115\r\n\r\n1. `data_url` is the correct http link\r\n2. `path_to_dummy_data` is a directory, which is causing the issue\r\n\r\nThat path comes from `download_dummy_data`, which I think assumes that the data comes from the zip file, but isn't aware of individual files. So it seems like it data manager needs to be aware if the url its getting is for a file or a zip/directory, and pass this information along. This might happen in `download_dummy_data`, but probably better to happen in `download_and_extract`? Maybe a simple check to see if `os.path.basename` returns the dummy data zip filename, if not then join paths with the basename of the url?",
"I think the dataset script works correctly. Just the dummy data structure seems to be wrong. I will soon add more commands that should make the create of the dummy data easier.\r\n\r\nI'd recommend that you won't concentrate too much on the dummy data.\r\nIf you manage to load the dataset correctly via:\r\n\r\n```python \r\n# use local path to qanta\r\nnlp.load_dataset(\"./datasets/qanta\")\r\n```\r\n\r\nthen feel free to open a PR and we will look into the dummy data problem together :-) \r\n\r\nAlso please make sure that the Version is in the format 1.0.0 (three numbers separated by two points) - not a date. ",
"The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n\r\nOn version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?",
"> The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n> \r\n> On version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?\r\n\r\nIt would cause issues for sure for the tests....not sure if it would also cause issues otherwise.\r\n\r\nI would prefer to keep the same version style as we have for other models. You could for example simply add version 1.0.0 and add a comment with the date you currently use for the versioning.\r\n\r\n What is your opinion regarding the version here @lhoestq @mariamabarham @thomwolf ? ",
"Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia",
"> Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia\r\n\r\nI'm not sure if this will work because the name should be unique and it seems that he has multiple config name in his data with the same version.\r\nAs @patrickvonplaten suggested, I think you can add a comment about the version in the data description.",
"Actually maybe our versioning format (inherited from tfds) is too strong for what we use it for?\r\nWe could allow any string maybe?\r\n\r\nI see it more and more like an identifier for the user that we will back with a serious hashing/versioning system.- so we could let the user quite free on it.",
"I'm good with either putting it in description, adding it to the config, or loosening version formatting. I mostly don't have a full conceptual grasp of what each identifier ends up meaning in the datasets code so hard to evaluate the best approach.\r\n\r\nFor background, the multiple formats is a consequence of:\r\n\r\n1. Each example is one multi-sentence trivia question\r\n2. For training, its better to treat each sentence as an example\r\n3. For evaluation, should test on: (1) first sentence, (2) full question, and (3) partial questions (does the model get the question right having seen the first half)\r\n\r\nWe use the date format for version since: (1) we expect some degree of updates since new questions come in every year and (2) the timestamp itself matches the Wikipedia dump that it is dependent on (so similar to the Wikipedia dataset).\r\n\r\nperhaps this is better discussed in https://github.com/huggingface/nlp/pull/169 or update title?"
] |
https://api.github.com/repos/huggingface/datasets/issues/160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/160/comments | https://api.github.com/repos/huggingface/datasets/issues/160/events | https://github.com/huggingface/datasets/issues/160 | 620,448,236 | MDU6SXNzdWU2MjA0NDgyMzY= | 160 | caching in map causes same result to be returned for train, validation and test | {
"avatar_url": "https://avatars.githubusercontent.com/u/247881?v=4",
"events_url": "https://api.github.com/users/dpressel/events{/privacy}",
"followers_url": "https://api.github.com/users/dpressel/followers",
"following_url": "https://api.github.com/users/dpressel/following{/other_user}",
"gists_url": "https://api.github.com/users/dpressel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dpressel",
"id": 247881,
"login": "dpressel",
"node_id": "MDQ6VXNlcjI0Nzg4MQ==",
"organizations_url": "https://api.github.com/users/dpressel/orgs",
"received_events_url": "https://api.github.com/users/dpressel/received_events",
"repos_url": "https://api.github.com/users/dpressel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dpressel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dpressel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dpressel"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 7 | "2020-05-18T19:22:03Z" | "2020-05-18T21:36:20Z" | "2020-05-18T21:36:20Z" | NONE | null | null | null | hello,
I am working on a program that uses the `nlp` library with the `SST2` dataset.
The rough outline of the program is:
```
import nlp as nlp_datasets
...
parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+')
...
dataset = nlp_datasets.load_dataset(*args.dataset)
...
# Create feature vocabs
vocabs = create_vocabs(dataset.values(), vectorizers)
...
# Create a function to vectorize based on vectorizers and vocabs:
print('TS', train_set.num_rows)
print('VS', valid_set.num_rows)
print('ES', test_set.num_rows)
# factory method to create a `convert_to_features` function based on vocabs
convert_to_features = create_featurizer(vectorizers, vocabs)
train_set = train_set.map(convert_to_features, batched=True)
train_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batchsz)
valid_set = valid_set.map(convert_to_features, batched=True)
valid_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=args.batchsz)
test_set = test_set.map(convert_to_features, batched=True)
test_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
test_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batchsz)
print('TS', train_set.num_rows)
print('VS', valid_set.num_rows)
print('ES', test_set.num_rows)
```
I'm not sure if I'm using it incorrectly, but the results are not what I expect. Namely, `.map()` seems to grab the dataset from the cache and then loses track of which specific dataset it is, instead using my training data for all datasets:
```
TS 67349
VS 872
ES 1821
TS 67349
VS 67349
ES 67349
```
The behavior changes if I turn off the caching but then the results fail:
```
train_set = train_set.map(convert_to_features, batched=True, load_from_cache_file=False)
...
valid_set = valid_set.map(convert_to_features, batched=True, load_from_cache_file=False)
...
test_set = test_set.map(convert_to_features, batched=True, load_from_cache_file=False)
```
Now I get the right set of features back...
```
TS 67349
VS 872
ES 1821
100%|██████████| 68/68 [00:00<00:00, 92.78it/s]
100%|██████████| 1/1 [00:00<00:00, 75.47it/s]
0%| | 0/2 [00:00<?, ?it/s]TS 67349
VS 872
ES 1821
100%|██████████| 2/2 [00:00<00:00, 77.19it/s]
```
but I think it's losing track of the original training set:
```
Traceback (most recent call last):
File "/home/dpressel/dev/work/baseline/api-examples/layers-classify-hf-datasets.py", line 148, in <module>
for x in train_loader:
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__
output_all_columns=self._output_all_columns,
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 294, in _getitem
outputs = self._unnest(self._data.slice(key, 1).to_pydict())
File "pyarrow/table.pxi", line 1211, in pyarrow.lib.Table.slice
File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 3: In chunk 0: Invalid: Length spanned by list offsets (15859698) larger than values array (length 100000)
Process finished with exit code 1
```
The full-example program (minus the print stmts) is here:
https://github.com/dpressel/mead-baseline/pull/620/files
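A possible workaround until the caching fix lands, reusing the `train_set`/`valid_set`/`test_set` and `convert_to_features` names from the snippet above and assuming the `cache_file_name` argument of `.map()` quoted in the discussion below (a sketch, not an official recommendation): give every split its own cache file so the cached results cannot collide.
```python
# Hypothetical cache paths; any distinct, writable locations would do.
train_set = train_set.map(convert_to_features, batched=True,
                          cache_file_name="sst2_train_features.arrow")
valid_set = valid_set.map(convert_to_features, batched=True,
                          cache_file_name="sst2_valid_features.arrow")
test_set = test_set.map(convert_to_features, batched=True,
                        cache_file_name="sst2_test_features.arrow")
```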
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/160/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/160/timeline | null | completed | false | [
"Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ",
"Hi, the full example was listed in the PR above, but here is the exact link:\r\n\r\nhttps://github.com/dpressel/mead-baseline/blob/3c1aa3ca062cb23f303ca98ac40b6652b37ee971/api-examples/layers-classify-hf-datasets.py\r\n\r\nThe problem is coming from\r\n```\r\n if cache_file_name is None:\r\n # we create a unique hash from the function, current dataset file and the mapping args\r\n cache_kwargs = {\r\n \"with_indices\": with_indices,\r\n \"batched\": batched,\r\n \"batch_size\": batch_size,\r\n \"remove_columns\": remove_columns,\r\n \"keep_in_memory\": keep_in_memory,\r\n \"load_from_cache_file\": load_from_cache_file,\r\n \"cache_file_name\": cache_file_name,\r\n \"writer_batch_size\": writer_batch_size,\r\n \"arrow_schema\": arrow_schema,\r\n \"disable_nullable\": disable_nullable,\r\n }\r\n cache_file_name = self._get_cache_file_path(function, cache_kwargs)\r\n```\r\nThe cached value is always the same, but I was able to change that by just renaming the function each time which seems to fix the issue.",
"Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq ",
"This fixed my issue (I think)\r\n\r\nhttps://github.com/dpressel/mead-baseline/commit/48aa8ecde4b307bd3e7dde5fe71e43a1d4769ee1",
"> Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq\r\n\r\nOh, awesome! I see the PR, Ill check it out",
"The PR should prevent the cache from losing track of the of the dataset type (based on the location of its data). Not sure about your second problem though (cache off).",
"Yes, with caching on, it seems to work without the function renaming hack, I mentioned this also in the PR. Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/159/comments | https://api.github.com/repos/huggingface/datasets/issues/159/events | https://github.com/huggingface/datasets/issues/159 | 620,420,700 | MDU6SXNzdWU2MjA0MjA3MDA= | 159 | How can we add more datasets to nlp library? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17886829?v=4",
"events_url": "https://api.github.com/users/Tahsin-Mayeesha/events{/privacy}",
"followers_url": "https://api.github.com/users/Tahsin-Mayeesha/followers",
"following_url": "https://api.github.com/users/Tahsin-Mayeesha/following{/other_user}",
"gists_url": "https://api.github.com/users/Tahsin-Mayeesha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tahsin-Mayeesha",
"id": 17886829,
"login": "Tahsin-Mayeesha",
"node_id": "MDQ6VXNlcjE3ODg2ODI5",
"organizations_url": "https://api.github.com/users/Tahsin-Mayeesha/orgs",
"received_events_url": "https://api.github.com/users/Tahsin-Mayeesha/received_events",
"repos_url": "https://api.github.com/users/Tahsin-Mayeesha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tahsin-Mayeesha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tahsin-Mayeesha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tahsin-Mayeesha"
} | [] | closed | false | null | [] | null | 1 | "2020-05-18T18:35:31Z" | "2020-05-18T18:37:08Z" | "2020-05-18T18:37:07Z" | NONE | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/159/timeline | null | completed | false | [
"Found it. https://github.com/huggingface/nlp/tree/master/datasets"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/158/comments | https://api.github.com/repos/huggingface/datasets/issues/158/events | https://github.com/huggingface/datasets/pull/158 | 620,396,658 | MDExOlB1bGxSZXF1ZXN0NDE5NjUyNTQy | 158 | add Toronto Books Corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-05-18T17:54:45Z" | "2020-06-11T07:49:15Z" | "2020-05-19T07:34:56Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/158.diff",
"html_url": "https://github.com/huggingface/datasets/pull/158",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/158.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/158"
} | This PR adds the Toronto Books Corpus.
It only considers the TMX and plain text files (Moses) defined in the table **Statistics and TMX/Moses Downloads** [here](http://opus.nlpl.eu/Books.php). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/158/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/157/comments | https://api.github.com/repos/huggingface/datasets/issues/157/events | https://github.com/huggingface/datasets/issues/157 | 620,356,542 | MDU6SXNzdWU2MjAzNTY1NDI= | 157 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)" | {
"avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4",
"events_url": "https://api.github.com/users/saahiluppal/events{/privacy}",
"followers_url": "https://api.github.com/users/saahiluppal/followers",
"following_url": "https://api.github.com/users/saahiluppal/following{/other_user}",
"gists_url": "https://api.github.com/users/saahiluppal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/saahiluppal",
"id": 47444392,
"login": "saahiluppal",
"node_id": "MDQ6VXNlcjQ3NDQ0Mzky",
"organizations_url": "https://api.github.com/users/saahiluppal/orgs",
"received_events_url": "https://api.github.com/users/saahiluppal/received_events",
"repos_url": "https://api.github.com/users/saahiluppal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/saahiluppal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saahiluppal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/saahiluppal"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 11 | "2020-05-18T16:46:38Z" | "2020-06-05T08:08:58Z" | "2020-06-05T08:08:58Z" | NONE | null | null | null | I'm trying to load datasets from nlp but there seems to have error saying
"TypeError: list_() takes exactly one argument (2 given)"
A gist can be found here:
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/157/timeline | null | completed | false | [
"You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`",
"If you want to load a local dataset, make sure you include a `./` before the folder name. ",
"This happens by just doing run all cells on colab ... I assumed the colab example is broken. ",
"Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n```\r\n!pip uninstall -y -qq pyarrow\r\n!pip uninstall -y -qq nlp\r\n!pip install -qq git+https://github.com/huggingface/nlp.git\r\n```",
"> Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n> \r\n> ```\r\n> !pip uninstall -y -qq pyarrow\r\n> !pip uninstall -y -qq nlp\r\n> !pip install -qq git+https://github.com/huggingface/nlp.git\r\n> ```\r\n\r\nTried, having the same error.",
"Can you post a link here of your colab? I'll make a copy of it and see what's wrong",
"This should be fixed in the current version of the notebook. You can try it again",
"Also see: https://github.com/huggingface/nlp/issues/222",
"I am getting this error when running this command\r\n```\r\nval = nlp.load_dataset('squad', split=\"validation\")\r\n```\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/dataset_info.json'\r\n\r\nCan anybody help?",
"It seems like your download was corrupted :-/ Can you run the following command: \r\n\r\n```\r\nrm -r /root/.cache/huggingface/datasets\r\n```\r\n\r\nto delete the cache completely and rerun the download? ",
"I tried the notebook again today and it worked without barfing. 👌 "
] |
https://api.github.com/repos/huggingface/datasets/issues/156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/156/comments | https://api.github.com/repos/huggingface/datasets/issues/156/events | https://github.com/huggingface/datasets/issues/156 | 620,263,687 | MDU6SXNzdWU2MjAyNjM2ODc= | 156 | SyntaxError with WMT datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4",
"events_url": "https://api.github.com/users/tomhosking/events{/privacy}",
"followers_url": "https://api.github.com/users/tomhosking/followers",
"following_url": "https://api.github.com/users/tomhosking/following{/other_user}",
"gists_url": "https://api.github.com/users/tomhosking/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tomhosking",
"id": 9419158,
"login": "tomhosking",
"node_id": "MDQ6VXNlcjk0MTkxNTg=",
"organizations_url": "https://api.github.com/users/tomhosking/orgs",
"received_events_url": "https://api.github.com/users/tomhosking/received_events",
"repos_url": "https://api.github.com/users/tomhosking/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tomhosking/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomhosking/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tomhosking"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 7 | "2020-05-18T14:38:18Z" | "2020-07-23T16:41:55Z" | "2020-07-23T16:41:55Z" | NONE | null | null | null | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-3206959998b9>", line 3, in <module>
dataset = nlp.load_dataset('wmt14')
File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 505, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 56, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt14.py", line 21, in <module>
from .wmt_utils import Wmt, WmtConfig
File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt_utils.py", line 659
<<<<<<< HEAD
^
SyntaxError: invalid syntax
```
Python version:
`3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]`
Running on Ubuntu 18.04, via a Jupyter notebook | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/156/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/156/timeline | null | completed | false | [
"Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !",
"Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-28-3206959998b9> in <module>\r\n 1 import nlp\r\n 2 \r\n----> 3 dataset = nlp.load_dataset('wmt14')\r\n 4 print(dataset['train'][0])\r\n\r\n~/.local/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 507 # Instantiate the dataset builder\r\n 508 builder_instance = builder_cls(\r\n--> 509 cache_dir=cache_dir, name=name, version=version, data_dir=data_dir, data_files=data_files, **config_kwargs,\r\n 510 )\r\n 511 \r\n\r\nTypeError: Can't instantiate abstract class Wmt with abstract methods _subsets\r\n```\r\n\r\n",
"To correct this error I think you need the master branch of `nlp`. Can you try to install `nlp` with. `WMT` was not included at the beta release of the library. \r\n\r\nCan you try:\r\n`pip install git+https://github.com/huggingface/nlp.git`\r\n\r\nand check again? ",
"That works, thanks :)\r\n\r\nThe WMT datasets are listed in by `list_datasets()` in the beta release on pypi - it would be good to only show datasets that are actually supported by that version?",
"Usually, the idea is that a dataset can be added without releasing a new version. The problem in the case of `WMT` was that some \"core\" code of the library had to be changed as well. \r\n\r\n@thomwolf @lhoestq @julien-c - How should we go about this. If we add a dataset that also requires \"core\" code changes, how do we handle the versioning? The moment a dataset is on AWS it will actually be listed with `list_datasets()` in all earlier versions...\r\n\r\nIs there a way to somehow insert the `pip version` to the HfApi() and get only the datasets that were available for this version (at the date of the release of the version) @julien-c ? ",
"We plan to have something like a `requirements.txt` per dataset to prevent user from loading dataset with old version of `nlp` or any other libraries. Right now the solution is just to keep `nlp` up to date when you want to load a dataset that leverages the latests features of `nlp`.\r\n\r\nFor datasets that are on AWS but that use features that are not released yet we should be able to filter those from the `list_dataset` as soon as we have the `requirements.txt` feature on (filter datasets that need a future version of `nlp`).\r\n\r\nShall we rename this issue to be more explicit about the problem ?\r\nSomething like `Specify the minimum version of the nlp library required for each dataset` ?",
"Closing this one.\r\nFeel free to re-open if you have other questions :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/155/comments | https://api.github.com/repos/huggingface/datasets/issues/155/events | https://github.com/huggingface/datasets/pull/155 | 620,067,946 | MDExOlB1bGxSZXF1ZXN0NDE5Mzg1ODM0 | 155 | Include more links in README, fix typos | {
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bharatr21",
"id": 13381361,
"login": "bharatr21",
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bharatr21"
} | [] | closed | false | null | [] | null | 1 | "2020-05-18T09:47:08Z" | "2020-05-28T08:31:57Z" | "2020-05-28T08:31:57Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/155",
"merged_at": "2020-05-28T08:31:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/155"
} | Include more links and fix typos in README | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/155/timeline | null | null | true | [
"I fixed a conflict :) thanks !"
] |
https://api.github.com/repos/huggingface/datasets/issues/154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/154/comments | https://api.github.com/repos/huggingface/datasets/issues/154/events | https://github.com/huggingface/datasets/pull/154 | 620,059,066 | MDExOlB1bGxSZXF1ZXN0NDE5Mzc4Mzgw | 154 | add Ubuntu Dialogs Corpus datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-05-18T09:34:48Z" | "2020-05-18T10:12:28Z" | "2020-05-18T10:12:27Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/154",
"merged_at": "2020-05-18T10:12:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/154"
} | This PR adds the Ubuntu Dialogue Corpus dataset, version 2.0. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/154/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/153/comments | https://api.github.com/repos/huggingface/datasets/issues/153/events | https://github.com/huggingface/datasets/issues/153 | 619,972,246 | MDU6SXNzdWU2MTk5NzIyNDY= | 153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | null | 4 | "2020-05-18T07:24:22Z" | "2020-05-18T21:18:16Z" | null | MEMBER | null | null | null | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessible and not only the generic citation of the meta-dataset itself.
Let's take GLUE as an example:
The configuration has the citation for each dataset included (e.g. [here](https://github.com/huggingface/nlp/blob/master/datasets/glue/glue.py#L154-L161)) but it should be copied inside the dataset info so that, when people access `dataset.info.citation` they get both the citation for GLUE and the citation for the specific datasets inside GLUE that they have loaded. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/153/timeline | null | null | false | [
"As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.",
"Actually, double checking with @mariamabarham, we already have this feature I think.\r\n\r\nIt's like this currently:\r\n```python\r\n>>> from nlp import load_dataset\r\n>>> \r\n>>> dataset = load_dataset('glue', 'cola', split='train')\r\n>>> print(dataset.info.citation)\r\n@article{warstadt2018neural,\r\n title={Neural Network Acceptability Judgments},\r\n author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},\r\n journal={arXiv preprint arXiv:1805.12471},\r\n year={2018}\r\n}\r\n@inproceedings{wang2019glue,\r\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\r\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\r\n note={In the Proceedings of ICLR.},\r\n year={2019}\r\n}\r\n\r\nNote that each GLUE dataset has its own citation. Please see the source to see\r\nthe correct citation for each contained dataset.\r\n```\r\n\r\nWhat do you think @dseddah?",
"Looks good but why would there be a difference between the ref in the source and the one to be printed? ",
"Yes, I think we should remove this warning @mariamabarham.\r\n\r\nIt's probably a relic of tfds which didn't have the same way to access citations. "
] |
https://api.github.com/repos/huggingface/datasets/issues/152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/152/comments | https://api.github.com/repos/huggingface/datasets/issues/152/events | https://github.com/huggingface/datasets/pull/152 | 619,971,900 | MDExOlB1bGxSZXF1ZXN0NDE5MzA4OTE2 | 152 | Add GLUE config name check | {
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bharatr21",
"id": 13381361,
"login": "bharatr21",
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bharatr21"
} | [] | closed | false | null | [] | null | 5 | "2020-05-18T07:23:43Z" | "2020-05-27T22:09:12Z" | "2020-05-27T22:09:12Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/152.diff",
"html_url": "https://github.com/huggingface/datasets/pull/152",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/152.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/152"
} | Fixes #130 by adding a name check to the Glue class | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/152/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/152/timeline | null | null | true | [
"If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review",
"Looks good to me. Is this compatible with the way we are doing tests right now @patrickvonplaten ?",
"If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the tests pass locally via: \r\n`pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_glue`",
"The test fails with an `AssertionError` because the name is not being passed to kwargs, however I'm not sure how to do that, because only the config file is being passed to the tests of all datasets?\r\n\r\nI'm guessing this is the corresponding code:\r\nhttps://github.com/huggingface/nlp/blob/2b3621bb5c78caf02c5a969b8e67fa0c145da4e6/tests/test_dataset_common.py#L141-L143\r\n\r\nAnd these are the logs:\r\n```\r\n___________________ DatasetTest.test_load_dataset_local_glue ___________________\r\n\r\nself = <tests.test_dataset_common.DatasetTest testMethod=test_load_dataset_local_glue>\r\ndataset_name = 'glue'\r\n\r\n @local\r\n def test_load_dataset_local(self, dataset_name):\r\n # test only first config\r\n if \"/\" in dataset_name:\r\n logging.info(\"Skip {} because it is not a canonical dataset\")\r\n return\r\n\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:200:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_dataset_common.py:74: in check_load_dataset\r\n dataset_builder = dataset_builder_cls(config=config, cache_dir=processed_temp_dir)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <nlp.datasets.glue.fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597.glue.Glue object at 0x135c0ea90>\r\nargs = ()\r\nkwargs = {'cache_dir': '/var/folders/r6/mnw5ntvn5y72j7d4s1fm273m0000gn/T/tmpa9rpq3tl', 'config': GlueConfig(name='cola', versio...linguistic theory. Each example is a sequence of words annotated\\nwith whether it is a grammatical English sentence.')}\r\n\r\n def __init__(self, *args, **kwargs):\r\n> assert ('name' in kwargs and kwargs['name'] is not None), \"Glue has to be called with a configuration name\"\r\nE AssertionError: Glue has to be called with a configuration name\r\n\r\n/usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py:139: AssertionError\r\n----------------------------- Captured stderr call -----------------------------\r\nINFO:nlp.load:Checking ./datasets/glue/glue.py for additional imports.\r\nINFO:filelock:Lock 5209998288 acquired on ./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO:nlp.load:Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO:nlp.load:Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO:nlp.load:Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO:filelock:Lock 5209998288 released on ./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Checking ./datasets/glue/glue.py for additional imports.\r\nINFO:filelock:Lock 5196802640 acquired on 
./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO:nlp.load:Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO:nlp.load:Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO:nlp.load:Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO:filelock:Lock 5196802640 released on ./datasets/glue/glue.py.lock\r\n------------------------------ Captured log call -------------------------------\r\nINFO nlp.load:load.py:157 Checking ./datasets/glue/glue.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 5209998288 acquired on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO nlp.load:load.py:346 Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO nlp.load:load.py:356 Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO nlp.load:load.py:367 Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO filelock:filelock.py:318 Lock 5209998288 released on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:157 Checking ./datasets/glue/glue.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 5196802640 acquired on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO nlp.load:load.py:346 Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO nlp.load:load.py:356 Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO nlp.load:load.py:367 Found metadata file for dataset ./datasets/glue/glue.py at 
/usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO filelock:filelock.py:318 Lock 5196802640 released on ./datasets/glue/glue.py.lock\r\n```",
"Closing as #130 is fixed !"
] |
https://api.github.com/repos/huggingface/datasets/issues/151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/151/comments | https://api.github.com/repos/huggingface/datasets/issues/151/events | https://github.com/huggingface/datasets/pull/151 | 619,968,480 | MDExOlB1bGxSZXF1ZXN0NDE5MzA2MTYz | 151 | Fix JSON tests. | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jplu",
"id": 959590,
"login": "jplu",
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"repos_url": "https://api.github.com/users/jplu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jplu"
} | [] | closed | false | null | [] | null | 0 | "2020-05-18T07:17:38Z" | "2020-05-18T07:21:52Z" | "2020-05-18T07:21:51Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/151",
"merged_at": "2020-05-18T07:21:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/151"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/151/timeline | null | null | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/150/comments | https://api.github.com/repos/huggingface/datasets/issues/150/events | https://github.com/huggingface/datasets/pull/150 | 619,809,645 | MDExOlB1bGxSZXF1ZXN0NDE5MTgyODU4 | 150 | Add WNUT 17 NER dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stefan-it",
"id": 20651387,
"login": "stefan-it",
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stefan-it"
} | [] | closed | false | null | [] | null | 4 | "2020-05-17T22:19:04Z" | "2020-05-26T20:37:59Z" | "2020-05-26T20:37:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/150.diff",
"html_url": "https://github.com/huggingface/datasets/pull/150",
"merged_at": "2020-05-26T20:37:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/150.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/150"
} | Hi,
this PR adds the WNUT 17 dataset to `nlp`.
> Emerging and Rare entity recognition
> This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet “so.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve. This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text.
>
> The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities.
More information about the dataset can be found on the [shared task page](https://noisy-text.github.io/2017/emerging-rare-entities.html).
The dataset is taken from their [GitHub repository](https://github.com/leondz/emerging_entities_17), because the data provided in this repository contains minor fixes in the dataset format.
## Usage
Then the WNUT 17 dataset can be used in `nlp` like this:
```python
import nlp
wnut_17 = nlp.load_dataset("./datasets/wnut_17/wnut_17.py")
print(wnut_17)
```
This outputs:
```txt
'train': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 3394)
'validation': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1009)
'test': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1287)
```
Numbers are identical to the ones in [this paper](https://www.ijcai.org/Proceedings/2019/0702.pdf) and are the same as when using the `dataset` reader in Flair.
## Features
The following feature format is used to represent a sentence in the WNUT 17 dataset:
| Feature | Example | Description
| ---- | ---- | -----------------
| `id` | `0` | Number (id) of current sentence
| `tokens` | `["AHFA", "extends", "deadline"]` | List of tokens (strings) for a sentence
| `labels` | `["B-group", "O", "O"]` | List of labels (outer span)
The following labels are used in WNUT 17:
```txt
O
B-corporation
I-corporation
B-location
I-location
B-product
I-product
B-person
I-person
B-group
I-group
B-creative-work
I-creative-work
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/150/timeline | null | null | true | [
"The PR looks awesome! \r\nSince you have already added a dataset I imagine the tests as described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset all pass, right @stefan-it ?\r\n\r\nI think we are then good to merge this :-) @lhoestq ",
"Nice !\r\n\r\nOne thing though: I saw that you copied the `dataset_info.json` (one split info), which is different from the `dataset_infos.json` (split infos of all configs) that we expect.\r\n\r\nCould you generate the `dataset_infos.json` file using this command please ?\r\n```\r\npython nlp-cli test datasets/wnut_17 --save_infos --all_configs\r\n```",
"Hi @patrickvonplaten I just rebased onto latest `master` version and executed the commands. All tests passed then :)\r\n\r\n@lhoestq thanks for that hint! I've generated and added the `dataset_infos.json` and deleted `dataset_info.json`.",
"Awesome ! I guess it's ready to be merged now :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/149/comments | https://api.github.com/repos/huggingface/datasets/issues/149/events | https://github.com/huggingface/datasets/issues/149 | 619,735,739 | MDU6SXNzdWU2MTk3MzU3Mzk= | 149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/28959268?v=4",
"events_url": "https://api.github.com/users/danth/events{/privacy}",
"followers_url": "https://api.github.com/users/danth/followers",
"following_url": "https://api.github.com/users/danth/following{/other_user}",
"gists_url": "https://api.github.com/users/danth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/danth",
"id": 28959268,
"login": "danth",
"node_id": "MDQ6VXNlcjI4OTU5MjY4",
"organizations_url": "https://api.github.com/users/danth/orgs",
"received_events_url": "https://api.github.com/users/danth/received_events",
"repos_url": "https://api.github.com/users/danth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/danth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/danth"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 1 | "2020-05-17T15:42:39Z" | "2020-05-18T17:01:46Z" | "2020-05-18T17:01:46Z" | NONE | null | null | null | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/149/timeline | null | completed | false | [
"@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for now?"
] |
https://api.github.com/repos/huggingface/datasets/issues/148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/148/comments | https://api.github.com/repos/huggingface/datasets/issues/148/events | https://github.com/huggingface/datasets/issues/148 | 619,590,555 | MDU6SXNzdWU2MTk1OTA1NTU= | 148 | _download_and_prepare() got an unexpected keyword argument 'verify_infos' | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 2 | "2020-05-17T01:48:53Z" | "2020-05-18T07:38:33Z" | "2020-05-18T07:38:33Z" | CONTRIBUTOR | null | null | null | # Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell
dataset = nlp.load_dataset('wikipedia')
```
get
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-52471d2a0088> in <module>()
----> 1 dataset = nlp.load_dataset('wikipedia')
1 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos'
``` | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/148/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/148/timeline | null | completed | false | [
"Same error for dataset 'wiki40b'",
"Should be fixed on master :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/147/comments | https://api.github.com/repos/huggingface/datasets/issues/147/events | https://github.com/huggingface/datasets/issues/147 | 619,581,907 | MDU6SXNzdWU2MTk1ODE5MDc= | 147 | Error with sklearn train_test_split | {
"avatar_url": "https://avatars.githubusercontent.com/u/6853743?v=4",
"events_url": "https://api.github.com/users/ClonedOne/events{/privacy}",
"followers_url": "https://api.github.com/users/ClonedOne/followers",
"following_url": "https://api.github.com/users/ClonedOne/following{/other_user}",
"gists_url": "https://api.github.com/users/ClonedOne/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ClonedOne",
"id": 6853743,
"login": "ClonedOne",
"node_id": "MDQ6VXNlcjY4NTM3NDM=",
"organizations_url": "https://api.github.com/users/ClonedOne/orgs",
"received_events_url": "https://api.github.com/users/ClonedOne/received_events",
"repos_url": "https://api.github.com/users/ClonedOne/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ClonedOne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ClonedOne/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ClonedOne"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 2 | "2020-05-17T00:28:24Z" | "2020-06-18T16:23:23Z" | "2020-06-18T16:23:23Z" | NONE | null | null | null | It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:
```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)
```
throws:
```
ValueError: Can only get row(s) (int or slice) or columns (string).
```
It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/147/timeline | null | completed | false | [
"Indeed. Probably we will want to have a similar method directly in the library",
"Related: #166 "
] |
https://api.github.com/repos/huggingface/datasets/issues/146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/146/comments | https://api.github.com/repos/huggingface/datasets/issues/146/events | https://github.com/huggingface/datasets/pull/146 | 619,564,653 | MDExOlB1bGxSZXF1ZXN0NDE5MDI5MjUx | 146 | Add BERTScore to metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/7753366?v=4",
"events_url": "https://api.github.com/users/felixgwu/events{/privacy}",
"followers_url": "https://api.github.com/users/felixgwu/followers",
"following_url": "https://api.github.com/users/felixgwu/following{/other_user}",
"gists_url": "https://api.github.com/users/felixgwu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/felixgwu",
"id": 7753366,
"login": "felixgwu",
"node_id": "MDQ6VXNlcjc3NTMzNjY=",
"organizations_url": "https://api.github.com/users/felixgwu/orgs",
"received_events_url": "https://api.github.com/users/felixgwu/received_events",
"repos_url": "https://api.github.com/users/felixgwu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/felixgwu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixgwu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/felixgwu"
} | [] | closed | false | null | [] | null | 0 | "2020-05-16T22:09:39Z" | "2020-05-17T22:22:10Z" | "2020-05-17T22:22:09Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/146.diff",
"html_url": "https://github.com/huggingface/datasets/pull/146",
"merged_at": "2020-05-17T22:22:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/146.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/146"
} | This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics.
Here is an example of how to use it.
```python
import nlp
bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket
predictions = ['example', 'fruit']
references = [['this is an example.', 'this is one example.'], ['apple']]
results = bertscore.compute(predictions, references, lang='en')
print(results)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/146/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/145/comments | https://api.github.com/repos/huggingface/datasets/issues/145/events | https://github.com/huggingface/datasets/pull/145 | 619,480,549 | MDExOlB1bGxSZXF1ZXN0NDE4OTcxMjg0 | 145 | [AWS Tests] Follow-up PR from #144 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-05-16T13:53:46Z" | "2020-05-16T13:54:23Z" | "2020-05-16T13:54:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/145",
"merged_at": "2020-05-16T13:54:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/145"
} | I forgot to add this line in PR #145 . | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/145/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/144/comments | https://api.github.com/repos/huggingface/datasets/issues/144/events | https://github.com/huggingface/datasets/pull/144 | 619,477,367 | MDExOlB1bGxSZXF1ZXN0NDE4OTY5NjA1 | 144 | [AWS tests] AWS test should not run for canonical datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-05-16T13:39:30Z" | "2020-05-16T13:44:34Z" | "2020-05-16T13:44:33Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/144.diff",
"html_url": "https://github.com/huggingface/datasets/pull/144",
"merged_at": "2020-05-16T13:44:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/144.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/144"
} | AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset.
This PR changes the logic to the following:
1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical dataset, the PR includes his dataset in the tests.
2) All datasets that are only present on AWS, such as `webis/tl_dr` atm are tested only on AWS.
I think the testing structure might need a bigger refactoring and better documentation very soon.
Merging for now to unblock new PRs @thomwolf @mariamabarham . | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/144/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/143/comments | https://api.github.com/repos/huggingface/datasets/issues/143/events | https://github.com/huggingface/datasets/issues/143 | 619,457,641 | MDU6SXNzdWU2MTk0NTc2NDE= | 143 | ArrowTypeError in squad metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patil-suraj",
"id": 27137566,
"login": "patil-suraj",
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patil-suraj"
} | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | [] | null | 1 | "2020-05-16T12:06:37Z" | "2020-05-22T13:38:52Z" | "2020-05-22T13:36:48Z" | CONTRIBUTOR | null | null | null | `squad_metric.compute` is giving the following error
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is how my predictions and references look like
```
predictions[0]
# {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
```
```
references[0]
# {'answers': [{'text': 'Denver Broncos'},
{'text': 'Denver Broncos'},
{'text': 'Denver Broncos'}],
'id': '56be4db0acb8001400a502ec'}
```
These are structured as per the `squad_metric.compute` help string. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/143/timeline | null | completed | false | [
"There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```\r\n\r\nand the expected format is \r\n```\r\nArgs:\r\n predictions: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair as given in the references (see below)\r\n - 'prediction_text': the text of the answer\r\n references: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair (see above),\r\n - 'answers': a Dict {'text': list of possible texts for the answer, as a list of strings}\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/142/comments | https://api.github.com/repos/huggingface/datasets/issues/142/events | https://github.com/huggingface/datasets/pull/142 | 619,450,068 | MDExOlB1bGxSZXF1ZXN0NDE4OTU0OTc1 | 142 | [WMT] Add all wmt | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-05-16T11:28:46Z" | "2020-05-17T12:18:21Z" | "2020-05-17T12:18:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/142.diff",
"html_url": "https://github.com/huggingface/datasets/pull/142",
"merged_at": "2020-05-17T12:18:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/142.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/142"
} | This PR adds all WMT dataset scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng.
The datasets are fully functional though for the "big" language pairs "de-en" and "fr-en".
Overall I think the scripts are very messy and might need a big refactoring at some point.
For now I think they are good to merge (most dataset configs can be used). I will add "cs", "ru" and "hi" when the manual data is available. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/142/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/141/comments | https://api.github.com/repos/huggingface/datasets/issues/141/events | https://github.com/huggingface/datasets/pull/141 | 619,447,090 | MDExOlB1bGxSZXF1ZXN0NDE4OTUzMzQw | 141 | [Clean up] remove bogus folder | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 2 | "2020-05-16T11:13:42Z" | "2020-05-16T13:24:27Z" | "2020-05-16T13:24:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/141.diff",
"html_url": "https://github.com/huggingface/datasets/pull/141",
"merged_at": "2020-05-16T13:24:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/141.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/141"
} | @mariamabarham - I think you accidentally placed it there. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/141/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/141/timeline | null | null | true | [
"Same for the dataset_infos.json at the project root no ?",
"Sorry guys, I haven't noticed. Thank you for mentioning it."
] |
https://api.github.com/repos/huggingface/datasets/issues/140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/140/comments | https://api.github.com/repos/huggingface/datasets/issues/140/events | https://github.com/huggingface/datasets/pull/140 | 619,443,613 | MDExOlB1bGxSZXF1ZXN0NDE4OTUxMzg4 | 140 | [Tests] run local tests as default | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 2 | "2020-05-16T10:56:06Z" | "2020-05-16T13:21:44Z" | "2020-05-16T13:21:43Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/140",
"merged_at": "2020-05-16T13:21:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/140"
} | This PR also enables local tests by default
I think it's safer for now to enable both local and AWS tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are not correct. This PR aims at fixing this.
## Suggestion on how to commit to the repo from now on:
Now since the repo is "online", I think we should adopt a couple of best practices:
1) - No direct committing to the repo anymore. Every change should be opened in a PR and be well documented so that we can find it later
2) - Every PR has to be reviewed by at least x people (I guess @thomwolf you should decide here) because we now have to be much more careful when doing changes to the API for backward compatibility, etc...
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/140/timeline | null | null | true | [
"You are right and I think those are usual best practice :) I'm 100% fine with this^^",
"Merging this for now to unblock other PRs."
] |
https://api.github.com/repos/huggingface/datasets/issues/139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/139/comments | https://api.github.com/repos/huggingface/datasets/issues/139/events | https://github.com/huggingface/datasets/pull/139 | 619,327,409 | MDExOlB1bGxSZXF1ZXN0NDE4ODc4NzMy | 139 | Add GermEval 2014 NER dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stefan-it",
"id": 20651387,
"login": "stefan-it",
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stefan-it"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 4 | "2020-05-15T23:42:09Z" | "2020-05-16T13:56:37Z" | "2020-05-16T13:56:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/139.diff",
"html_url": "https://github.com/huggingface/datasets/pull/139",
"merged_at": "2020-05-16T13:56:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/139.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/139"
} | Hi,
this PR adds the GermEval 2014 NER dataset 😃
> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:
> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.
> - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens.
> - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]].
The dataset will be downloaded from the [official GermEval 2014 website](https://sites.google.com/site/germeval2014ner/data).
## Dataset format
Here's an example of the dataset format from the original dataset:
```tsv
# http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]
1 Aufgrund O O
2 seiner O O
3 Initiative O O
4 fand O O
5 2001/2002 O O
6 in O O
7 Stuttgart B-LOC O
8 , O O
9 Braunschweig B-LOC O
10 und O O
11 Bonn B-LOC O
12 eine O O
13 große O O
14 und O O
15 publizistisch O O
16 vielbeachtete O O
17 Troia-Ausstellung B-LOCpart O
18 statt O O
19 , O O
20 „ O O
21 Troia B-OTH B-LOC
22 - I-OTH O
23 Traum I-OTH O
24 und I-OTH O
25 Wirklichkeit I-OTH O
26 “ O O
27 . O O
```
Each sentence is encoded with one token per line (tab-separated columns).
The first column contains either a `#`, which signals the source the sentence is cited from and the date it was retrieved, or the token number within the sentence.
The second column contains the token.
Column three and four contain the named entity (in IOB2 scheme).
Outer spans are encoded in the third column, embedded/nested spans in the fourth column.
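For readers who want to inspect the raw files directly, here is a minimal, stand-alone sketch of reading this column layout. It assumes sentences are separated by blank lines (not visible in the snippet above), and it is not the loading code used in the dataset script:
```python
def read_germeval_sentences(path):
    """Yield (source, tokens, labels, nested_labels) tuples from a GermEval 2014 TSV file."""
    source, tokens, labels, nested = None, [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("#"):
                # citation source + retrieval date, e.g. "# http://... [2009-10-17]"
                source = line[1:].strip()
            elif line.strip():
                # token line: number <tab> token <tab> outer label <tab> nested label
                _, token, outer, inner = line.split("\t")[:4]
                tokens.append(token)
                labels.append(outer)
                nested.append(inner)
            elif tokens:
                # blank line ends the current sentence (assumption, see note above)
                yield source, tokens, labels, nested
                source, tokens, labels, nested = None, [], [], []
    if tokens:
        yield source, tokens, labels, nested
```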
## Features
I decided to keep most of the information from the dataset. That means the so-called "source" information (where the sentences come from + date information) is also returned for each sentence in the feature vector.
For each sentence in the dataset, one feature vector (`nlp.Features` definition) will be returned:
| Feature | Example | Description
| ---- | ---- | -----------------
| `id` | `0` | Number (id) of current sentence
| `source` | `http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]` | URL and retrieval date as string
| `tokens` | `["Schwartau", "sagte", ":"]` | List of tokens (strings) for a sentence
| `labels` | `["B-PER", "O", "O"]` | List of labels (outer span)
| `nested-labels` | `["O", "O", "O"]` | List of labels for nested span
## Example
The following command downloads the dataset from the official GermEval 2014 page and pre-processes it:
```bash
python nlp-cli test datasets/germeval_14 --all_configs
```
It then outputs the number of sentences for the training, development and test sets. The training set consists of 24,000 sentences, the development set of 2,200 and the test set of 5,100 sentences.
Now it can be imported and used with `nlp`:
```python
import nlp
germeval = nlp.load_dataset("./datasets/germeval_14/germeval_14.py")
assert len(germeval["train"]) == 24000
# Show first sentence of training set:
germeval["train"][0]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/139/timeline | null | null | true | [
"Had really fun playing around with this new library :heart: ",
"That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ",
"@patrickvonplaten Rebased it 😅\r\n\r\nHow can it test 🤔 I used:\r\n\r\n```bash\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_local_germeval_14\r\n# and\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_local_germeval_14\r\n```\r\n\r\nand the tests still pass :)",
"Perfect, if these tests pass that's great - I'll merge the PR then :-) Was it very difficult to create the dummy data structure? "
] |
https://api.github.com/repos/huggingface/datasets/issues/138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/138/comments | https://api.github.com/repos/huggingface/datasets/issues/138/events | https://github.com/huggingface/datasets/issues/138 | 619,225,191 | MDU6SXNzdWU2MTkyMjUxOTE= | 138 | Consider renaming to nld | {
"avatar_url": "https://avatars.githubusercontent.com/u/8059750?v=4",
"events_url": "https://api.github.com/users/honnibal/events{/privacy}",
"followers_url": "https://api.github.com/users/honnibal/followers",
"following_url": "https://api.github.com/users/honnibal/following{/other_user}",
"gists_url": "https://api.github.com/users/honnibal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/honnibal",
"id": 8059750,
"login": "honnibal",
"node_id": "MDQ6VXNlcjgwNTk3NTA=",
"organizations_url": "https://api.github.com/users/honnibal/orgs",
"received_events_url": "https://api.github.com/users/honnibal/received_events",
"repos_url": "https://api.github.com/users/honnibal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/honnibal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/honnibal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/honnibal"
} | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | [] | null | 13 | "2020-05-15T20:23:27Z" | "2022-09-16T05:18:22Z" | "2020-09-28T00:08:10Z" | NONE | null | null | null | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme.
If you stick with the `nlp` name for this package, if anyone uses it then they should rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere.
If people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order.
I don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider.
I suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p. | {
"+1": 33,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 33,
"url": "https://api.github.com/repos/huggingface/datasets/issues/138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/138/timeline | null | completed | false | [
"I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n",
"Chiming in to second everything @honnibal said, and to add that I think the current name is going to impact the discoverability of this library. People who are looking for \"NLP Datasets\" through a search engine are going to see a library called `nlp` and think it's too broad. People who are looking to do NLP in python are going to search \"Python NLP\" and end up here, confused that this is a collection of datasets.\r\n\r\nThe names of the other huggingface libraries work because they're the only game in town: there are not very many robust, distinct libraries for `tokenizers` or `transformers` in python, for example. But there are several options for NLP in python, and adding this as a possible search result for \"python nlp\" when datasets are likely not what someone is searching for adds noise and frustrates potential users.",
"I'm also not sure whether the naming of `nlp` is the problem itself, as long as it comes with the appropriate identifier, so maybe something like `huggingface_nlp`? This is analogous to what @honnibal and spacy are doing for `spacy-transformers`. Of course, this is a \"step back\" from the recent changes/renaming of transformers, but may be some middle ground between a complete rebranding, and keeping it identifiable.",
"Interesting, thanks for sharing your thoughts.\r\n\r\nAs we’ll move toward a first non-beta release, we will pool the community of contributors/users of the library for their opinions on a good final name (like when we renamed the beautifully (?) named `pytorch-pretrained-bert`)\r\n\r\nIn the meantime, using `from nlp import load_dataset, load_metric` should work 😉",
"I feel like we are conflating two distinct subjects here:\r\n\r\n1. @honnibal's point is that using `nlp` as a package name might break existing code and bring developer usability issues in the future\r\n2. @pmbaumgartner's point is that the `nlp` package name is too broad and shouldn't be used by a package that exposes only datasets and metrics\r\n\r\n(let me know if I mischaracterize your point)\r\n\r\nI'll chime in to say that the first point is a bit silly IMO. As Python developers due to the limitations of the import system we already have to share:\r\n- a single flat namespace for packages\r\n- which also conflicts with local modules i.e. local files\r\n\r\nIf we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI also think all Python software developers/ML engineers/scientists are capable of at least a subset of:\r\n- importing only the methods that they need like @thomwolf suggested\r\n- aliasing their import\r\n- renaming a local variable",
"By the way, `nlp` will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nI see it as a laboratory for testing several long-term ideas about how we could do NLP in terms of research as well as open-source and community sharing, most of these ideas being too experimental/big to fit in `transformers`.\r\n\r\nSome of the directions we would like to explore are about sharing, traceability and more experimental models, as well as seeing a model as the community-based process of creating a composite entity from data, optimization, and code.\r\n\r\nWe'll see how these ideas end up being implemented and we'll better know how we should define the library when we start to dive into these topics. I'll try to get the `nlp` team to draft a roadmap on these topics at some point.",
"> If we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI'm sort of confused by your point here. The namespace *is* shared by variable names. You should not use local variables that are named the same as modules, because then you cannot use the module within the scope of your function.\r\n\r\nFor instance,\r\n\r\n```python\r\n\r\nimport nlp\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n```\r\n\r\nThis is a bug: you've just overwritten the module, so now you can't use it. Or instead:\r\n\r\n```python\r\n\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n# (Later, e.g. in a notebook)\r\nimport nlp\r\n```\r\n\r\nThis is also a bug: you've overwritten your variable with an import.\r\n\r\nIf you have a module named `nlp`, you should avoid using `nlp` as a variable, or you'll have bugs in some contexts and inconsistencies in other contexts. You'll have situations where you need to import differently in one module vs another, or name variables differently in one context vs another, which is bad.\r\n\r\n> importing only the methods that they need like @thomwolf suggested\r\n\r\nOkay but the same logic applies to naming the module *literally anything else*. There's absolutely no point in having a module name that's 3 letters if you always plan to do `import from`! It would be entirely better to name it `nlp_datasets` if you don't want people to do `import nlp`.\r\n\r\nAnd finally:\r\n\r\n> By the way, nlp will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nSo...it isn't a datasets library? https://twitter.com/Thom_Wolf/status/1261282491622731781\r\n\r\nI'm confused 😕 ",
"Dropping by as I noticed that the library has been renamed `datasets` so I wonder if the conversation above is settled (`nlp` not used anymore) :) ",
"I guess indeed",
"I'd argue that `datasets` is worse than `nlp`. Datasets should be a user specific decision and not encapsulate all of python (`pip install datasets`). If this package contained every dataset in the world (NLP / vision / etc) then it would make sense =/",
"I can't speak for the HF team @jramapuram, but as member of the community it looks to me that HF wanted to avoid the past path of changing names as scope broadened over time:\r\n\r\nRemember\r\nhttps://github.com/huggingface/pytorch-openai-transformer-lm\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT\r\nhttps://github.com/huggingface/pytorch-transformers\r\nand now\r\nhttps://github.com/huggingface/transformers\r\n\r\n;) \r\n\r\nJokes aside, seems that the library is growing in a multi-modal direction (https://github.com/huggingface/datasets/pull/363) so the current name is not that implausible. Possibly HF ambition is really to grow its community and bring here a large chunk of datasets of the world (including tabular / vision / audio?).",
"Yea I see your point. However, wouldn't scoping solve the entire problem? \r\n\r\n```python\r\nimport huggingface.datasets as D\r\nimport huggingface.transformers as T\r\n```\r\n\r\nCalling something `datasets` is akin to saying I'm going to name my package `python` --> `import python` ",
"Sorry to reply to an old thread, but the name issue really makes troubles recently in my project.\r\n\r\nI'd never known in advance there's a package called \"datasets\". My first thought is that such a general term may be safe to arbitrarily use. Avoiding such a common name because of its ambiguity is quite weird.\r\n\r\nAs we know in python it's not easy to differentiate system-wide and project-wide import like in C and C++.\r\n\r\nOn the contrary I fully understand the challenge to rename a popular library. So it seems to provide a \"huggingface\" wrapper library as suggested above by @jramapuram may be a happy medium for both developers and users.\r\n\r\nBest Regards."
] |
https://api.github.com/repos/huggingface/datasets/issues/136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/136/comments | https://api.github.com/repos/huggingface/datasets/issues/136/events | https://github.com/huggingface/datasets/pull/136 | 619,211,018 | MDExOlB1bGxSZXF1ZXN0NDE4NzgxNzI4 | 136 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/75369?v=4",
"events_url": "https://api.github.com/users/renaud/events{/privacy}",
"followers_url": "https://api.github.com/users/renaud/followers",
"following_url": "https://api.github.com/users/renaud/following{/other_user}",
"gists_url": "https://api.github.com/users/renaud/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/renaud",
"id": 75369,
"login": "renaud",
"node_id": "MDQ6VXNlcjc1MzY5",
"organizations_url": "https://api.github.com/users/renaud/orgs",
"received_events_url": "https://api.github.com/users/renaud/received_events",
"repos_url": "https://api.github.com/users/renaud/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/renaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renaud/subscriptions",
"type": "User",
"url": "https://api.github.com/users/renaud"
} | [] | closed | false | null | [] | null | 1 | "2020-05-15T20:01:07Z" | "2020-05-17T12:17:28Z" | "2020-05-17T12:17:28Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/136",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/136"
} | small typo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/136/timeline | null | null | true | [
"Thanks, this was fixed with #135 :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/135/comments | https://api.github.com/repos/huggingface/datasets/issues/135/events | https://github.com/huggingface/datasets/pull/135 | 619,206,708 | MDExOlB1bGxSZXF1ZXN0NDE4Nzc4MTMw | 135 | Fix print statement in READ.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4",
"events_url": "https://api.github.com/users/codehunk628/events{/privacy}",
"followers_url": "https://api.github.com/users/codehunk628/followers",
"following_url": "https://api.github.com/users/codehunk628/following{/other_user}",
"gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codehunk628",
"id": 51091425,
"login": "codehunk628",
"node_id": "MDQ6VXNlcjUxMDkxNDI1",
"organizations_url": "https://api.github.com/users/codehunk628/orgs",
"received_events_url": "https://api.github.com/users/codehunk628/received_events",
"repos_url": "https://api.github.com/users/codehunk628/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codehunk628"
} | [] | closed | false | null | [] | null | 1 | "2020-05-15T19:52:23Z" | "2020-05-17T12:14:06Z" | "2020-05-17T12:14:05Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/135.diff",
"html_url": "https://github.com/huggingface/datasets/pull/135",
"merged_at": "2020-05-17T12:14:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/135.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/135"
} | print statement was throwing generator object instead of printing names of available datasets/metrics | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/135/timeline | null | null | true | [
"Indeed, thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/134/comments | https://api.github.com/repos/huggingface/datasets/issues/134/events | https://github.com/huggingface/datasets/pull/134 | 619,112,641 | MDExOlB1bGxSZXF1ZXN0NDE4Njk5OTYz | 134 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/8753078?v=4",
"events_url": "https://api.github.com/users/pranv/events{/privacy}",
"followers_url": "https://api.github.com/users/pranv/followers",
"following_url": "https://api.github.com/users/pranv/following{/other_user}",
"gists_url": "https://api.github.com/users/pranv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pranv",
"id": 8753078,
"login": "pranv",
"node_id": "MDQ6VXNlcjg3NTMwNzg=",
"organizations_url": "https://api.github.com/users/pranv/orgs",
"received_events_url": "https://api.github.com/users/pranv/received_events",
"repos_url": "https://api.github.com/users/pranv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pranv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pranv"
} | [] | closed | false | null | [] | null | 1 | "2020-05-15T16:56:14Z" | "2020-05-28T08:21:49Z" | "2020-05-28T08:21:49Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/134.diff",
"html_url": "https://github.com/huggingface/datasets/pull/134",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/134.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/134"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/134/timeline | null | null | true | [
"the readme got removed, closing this one"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/133/comments | https://api.github.com/repos/huggingface/datasets/issues/133/events | https://github.com/huggingface/datasets/issues/133 | 619,094,954 | MDU6SXNzdWU2MTkwOTQ5NTQ= | 133 | [Question] Using/adding a local dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zphang",
"id": 1668462,
"login": "zphang",
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"repos_url": "https://api.github.com/users/zphang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zphang"
} | [] | closed | false | null | [] | null | 5 | "2020-05-15T16:26:06Z" | "2020-07-23T16:44:09Z" | "2020-07-23T16:44:09Z" | NONE | null | null | null | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
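For what it's worth, my best guess is that this means pointing `load_dataset` at a local dataset script (a sketch; the path below is made up):
```python
import nlp

# load a dataset from a local dataset script instead of a canonical/remote one
dataset = nlp.load_dataset("./datasets/my_dataset/my_dataset.py")
```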
A notebook/example script demonstrating this would be very helpful. | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/133/timeline | null | completed | false | [
"Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\r\nDoes it make sense?",
"Could you give a more concrete example, please? \r\n\r\nI looked up wikitext dataset script from the repo. Should I just overwrite the `data_file` on line 98 to point to the local dataset directory? Would it work for different configurations of wikitext (wikitext2, wikitext103 etc.)?\r\n\r\nOr maybe we can use DownloadManager to specify local dataset location? In that case, where do we use DownloadManager instance?\r\n\r\nThanks",
"Hi @MaveriQ , although what I am doing is to commit a new dataset, but I think looking at imdb script might help.\r\nYou may want to use `dl_manager.download_custom`, give it a url(arbitrary string), a custom_download(arbitrary function) and return a path, and finally use _get sample to fetch a sample.",
"The download manager supports local directories. You can specify a local directory instead of a url and it should work.",
"Closing this one.\r\nFeel free to re-open if you have other questions :)"
] |