| Column | Type | Range / classes |
|---|---|---|
| id | int64 | 599M – 2.47B |
| url | stringlengths | 58 – 61 |
| repository_url | stringclasses | 1 value |
| events_url | stringlengths | 65 – 68 |
| labels | listlengths | 0 – 4 |
| active_lock_reason | null | - |
| updated_at | stringlengths | 20 – 20 |
| assignees | listlengths | 0 – 4 |
| html_url | stringlengths | 46 – 51 |
| author_association | stringclasses | 4 values |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| milestone | dict | - |
| comments | sequencelengths | 0 – 30 |
| title | stringlengths | 1 – 290 |
| reactions | dict | - |
| node_id | stringlengths | 18 – 32 |
| pull_request | dict | - |
| created_at | stringlengths | 20 – 20 |
| comments_url | stringlengths | 67 – 70 |
| body | stringlengths | 0 – 228k |
| user | dict | - |
| labels_url | stringlengths | 72 – 75 |
| timeline_url | stringlengths | 67 – 70 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| number | int64 | 1 – 7.11k |
| performed_via_github_app | null | - |
| closed_at | stringlengths | 20 – 20 |
| assignee | dict | - |
| is_pull_request | bool | 2 classes |
671,996,423
https://api.github.com/repos/huggingface/datasets/issues/471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/471/events
[]
null
2022-08-04T08:39:11Z
[]
https://github.com/huggingface/datasets/pull/471
CONTRIBUTOR
null
false
null
[]
add reuters21578 dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/471/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYyMTExNTU1
{ "diff_url": "https://github.com/huggingface/datasets/pull/471.diff", "html_url": "https://github.com/huggingface/datasets/pull/471", "merged_at": "2020-09-03T09:58:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/471.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/471" }
2020-08-03T11:07:14Z
https://api.github.com/repos/huggingface/datasets/issues/471/comments
new PR to add the reuters21578 dataset and fix the circle CI problems.

Fix partially:
- #353

Subsequent PR after:
- #449
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/471/timeline
closed
false
471
null
2020-09-03T09:58:50Z
null
true
671,952,276
https://api.github.com/repos/huggingface/datasets/issues/470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/470/events
[]
null
2020-09-07T12:33:30Z
[]
https://github.com/huggingface/datasets/pull/470
CONTRIBUTOR
null
false
null
[]
Adding IWSLT 2017 dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/470/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYyMDc0MzQ0
{ "diff_url": "https://github.com/huggingface/datasets/pull/470.diff", "html_url": "https://github.com/huggingface/datasets/pull/470", "merged_at": "2020-09-07T12:33:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/470.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/470" }
2020-08-03T09:52:39Z
https://api.github.com/repos/huggingface/datasets/issues/470/comments
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.

```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```

I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable, as English to German exists in both. Any opinion on how that should be done?

EDIT: I decided to just omit de-en from multilingual, as it's only a subset of the bilingual one. That way only language pairs exist.

EDIT: Could be interesting for #438
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil" }
https://api.github.com/repos/huggingface/datasets/issues/470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/470/timeline
closed
false
470
null
2020-09-07T12:33:30Z
null
true
671,876,963
https://api.github.com/repos/huggingface/datasets/issues/469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/469/events
[]
null
2023-07-20T15:54:17Z
[]
https://github.com/huggingface/datasets/issues/469
NONE
completed
null
null
[]
invalid data type 'str' at _convert_outputs in arrow_dataset.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/469/reactions" }
MDU6SXNzdWU2NzE4NzY5NjM=
null
2020-08-03T07:48:29Z
https://api.github.com/repos/huggingface/datasets/issues/469/comments
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the data set; while calling the trainer.train() method it throws the following error:

    File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
      v = command(v)
    TypeError: new(): invalid data type 'str'

I'm using pyarrow 1.0.0. And I have a simple custom data set with Text and Integer Label. Ex:

    Text , Label    #Column Header
    I'm facing an Network issue, 1
    I forgot my password, 2

Error StackTrace:

    File "C:\**\transformers\trainer.py", line 492, in train
      for step, inputs in enumerate(epoch_iterator):
    File "C:\**\tqdm\std.py", line 1104, in __iter__
      for obj in iterable:
    File "C:\**\torch\utils\data\dataloader.py", line 345, in __next__
      data = self._next_data()
    File "C:\**\torch\utils\data\dataloader.py", line 385, in _next_data
      data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in fetch
      data = [self.dataset[idx] for idx in possibly_batched_index]
    File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
      data = [self.dataset[idx] for idx in possibly_batched_index]
    File "C:\**\nlp\arrow_dataset.py", line 414, in __getitem__
      output_all_columns=self._output_all_columns,
    File "C:\**\nlp\arrow_dataset.py", line 403, in _getitem
      outputs, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns
    File "C:\**\nlp\arrow_dataset.py", line 343, in _convert_outputs
      v = command(v)
    TypeError: new(): invalid data type 'str'
{ "avatar_url": "https://avatars.githubusercontent.com/u/30617486?v=4", "events_url": "https://api.github.com/users/Murgates/events{/privacy}", "followers_url": "https://api.github.com/users/Murgates/followers", "following_url": "https://api.github.com/users/Murgates/following{/other_user}", "gists_url": "https://api.github.com/users/Murgates/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Murgates", "id": 30617486, "login": "Murgates", "node_id": "MDQ6VXNlcjMwNjE3NDg2", "organizations_url": "https://api.github.com/users/Murgates/orgs", "received_events_url": "https://api.github.com/users/Murgates/received_events", "repos_url": "https://api.github.com/users/Murgates/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Murgates/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Murgates/subscriptions", "type": "User", "url": "https://api.github.com/users/Murgates" }
https://api.github.com/repos/huggingface/datasets/issues/469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/469/timeline
closed
false
469
null
2023-07-20T15:54:17Z
null
false
671,622,441
https://api.github.com/repos/huggingface/datasets/issues/468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/468/events
[]
null
2020-08-20T08:16:08Z
[]
https://github.com/huggingface/datasets/issues/468
MEMBER
completed
null
null
[]
UnicodeDecodeError while loading PAN-X task of XTREME dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/468/reactions" }
MDU6SXNzdWU2NzE2MjI0NDE=
null
2020-08-02T14:05:10Z
https://api.github.com/repos/huggingface/datasets/issues/468/comments
Hi 🤗 team!

## Description of the problem

I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset of the XTREME dataset:

```
---------------------------------------------------------------------------
UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-5-1d61f439b843> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')

/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
    528     ignore_verifications = ignore_verifications or save_infos
    529     # Download/copy dataset processing script
--> 530     module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
    531
    532     # Get dataset builder class from the processing script

/usr/local/lib/python3.6/dist-packages/nlp/load.py in prepare_module(path, download_config, dataset, force_local_path, **download_kwargs)
    265
    266     # Download external imports if needed
--> 267     imports = get_imports(local_path)
    268     local_imports = []
    269     library_imports = []

/usr/local/lib/python3.6/dist-packages/nlp/load.py in get_imports(file_path)
    156     lines = []
    157     with open(file_path, mode="r") as f:
--> 158         lines.extend(f.readlines())
    159
    160     logger.info("Checking %s for additional imports.", file_path)

/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
     24 class IncrementalDecoder(codecs.IncrementalDecoder):
     25     def decode(self, input, final=False):
---> 26         return codecs.ascii_decode(input, self.errors)[0]
     27
     28 class StreamWriter(Codec,codecs.StreamWriter):

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 111: ordinal not in range(128)
```

## Steps to reproduce

Install from nlp's master branch

```python
pip install git+https://github.com/huggingface/nlp.git
```

then run

```python
from nlp import load_dataset

# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
```

## OS / platform details

- `nlp` version: latest from master
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False

## Proposed solution

Either change [line 762](https://github.com/huggingface/nlp/blob/7ada00b1d62f94eee22a7df38c6b01e3f27194b7/datasets/xtreme/xtreme.py#L762) in `xtreme.py` to include UTF-8 encoding:

```
# old
with open(filepath) as f

# new
with open(filepath, encoding='utf-8') as f
```

or raise a warning that suggests setting the locale explicitly, e.g.

```python
import locale
locale.setlocale(locale.LC_ALL, 'C.UTF-8')
```

I have a preference for the first solution. Let me know if you agree and I'll be happy to implement the simple fix!
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/468/timeline
closed
false
468
null
2020-08-20T08:16:08Z
null
false
671,580,010
https://api.github.com/repos/huggingface/datasets/issues/467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/467/events
[]
null
2020-08-02T13:52:27Z
[]
https://github.com/huggingface/datasets/pull/467
CONTRIBUTOR
null
false
null
[]
DOCS: Fix typo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/467/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy
{ "diff_url": "https://github.com/huggingface/datasets/pull/467.diff", "html_url": "https://github.com/huggingface/datasets/pull/467", "merged_at": "2020-08-02T09:18:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/467.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/467" }
2020-08-02T08:59:37Z
https://api.github.com/repos/huggingface/datasets/issues/467/comments
Fix typo from dictionnary -> dictionary
{ "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "events_url": "https://api.github.com/users/bharatr21/events{/privacy}", "followers_url": "https://api.github.com/users/bharatr21/followers", "following_url": "https://api.github.com/users/bharatr21/following{/other_user}", "gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bharatr21", "id": 13381361, "login": "bharatr21", "node_id": "MDQ6VXNlcjEzMzgxMzYx", "organizations_url": "https://api.github.com/users/bharatr21/orgs", "received_events_url": "https://api.github.com/users/bharatr21/received_events", "repos_url": "https://api.github.com/users/bharatr21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions", "type": "User", "url": "https://api.github.com/users/bharatr21" }
https://api.github.com/repos/huggingface/datasets/issues/467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/467/timeline
closed
false
467
null
2020-08-02T09:18:54Z
null
true
670,766,891
https://api.github.com/repos/huggingface/datasets/issues/466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/466/events
[]
null
2020-08-17T15:15:00Z
[]
https://github.com/huggingface/datasets/pull/466
MEMBER
null
false
null
[]
[METRICS] Various improvements on metrics
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/466/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYxMDEzOTM0
{ "diff_url": "https://github.com/huggingface/datasets/pull/466.diff", "html_url": "https://github.com/huggingface/datasets/pull/466", "merged_at": "2020-08-17T15:14:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/466.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/466" }
2020-08-01T11:03:45Z
https://api.github.com/repos/huggingface/datasets/issues/466/comments
- Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes
- Allow numpy/pytorch/tensorflow/pandas objects to be fed directly to metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/466/timeline
closed
false
466
null
2020-08-17T15:14:59Z
null
true
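The first bullet in #466 ("disallow positional arguments") is usually implemented with a keyword-only signature. A minimal illustrative sketch in plain Python, not the actual `nlp.Metric` API:

```python
def compute_metric(*, predictions, references):
    # The bare "*" makes both arguments keyword-only, so callers cannot
    # silently swap predictions and references by passing them positionally.
    correct = sum(p == r for p, r in zip(predictions, references))
    return {"accuracy": correct / len(references)}

# compute_metric([1, 0, 1], [1, 1, 1])  # would raise TypeError: takes 0 positional arguments
print(compute_metric(predictions=[1, 0, 1], references=[1, 1, 1]))
```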
669,889,779
https://api.github.com/repos/huggingface/datasets/issues/465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/465/events
[]
null
2020-07-31T18:27:33Z
[]
https://github.com/huggingface/datasets/pull/465
MEMBER
null
false
null
[]
Keep features after transform
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/465/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYwMjEwODYw
{ "diff_url": "https://github.com/huggingface/datasets/pull/465.diff", "html_url": "https://github.com/huggingface/datasets/pull/465", "merged_at": "2020-07-31T18:27:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/465.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/465" }
2020-07-31T14:43:21Z
https://api.github.com/repos/huggingface/datasets/issues/465/comments
When applying a transform like `map`, some features were lost (and inferred features were used). It was the case for ClassLabel, Translation, etc.

To fix that, I made some modifications in the `ArrowWriter`:

- added the `update_features` parameter. When it's `True`, the features specified by the user (if any) can be updated with inferred features if their types don't match. The `map` transform sets `update_features=True` when writing to a cache file or buffer. Features won't change by default in `map`.
- added the `with_metadata` parameter. If `True`, the `features` (after update) will be written inside the metadata of the schema in this format:

```
{ "huggingface": {"features" : <serialized Features exactly like dataset_info.json>} }
```

Then, once a dataset is instantiated without info/features, these metadata are used to set the features of the dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/465/timeline
closed
false
465
null
2020-07-31T18:27:32Z
null
true
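The metadata format described in #465 can be illustrated with plain `pyarrow`; this is a hedged sketch of the round-trip (the helper names are made up for illustration, not the library's `ArrowWriter` code):

```python
import json
import pyarrow as pa

def schema_with_features(schema, features):
    # Attach the serialized features under a "huggingface" key, mirroring
    # the {"huggingface": {"features": ...}} layout described above.
    metadata = dict(schema.metadata or {})
    metadata[b"huggingface"] = json.dumps({"features": features}).encode("utf-8")
    return schema.with_metadata(metadata)

def features_from_schema(schema):
    # Recover the features when a dataset is instantiated without info/features.
    metadata = schema.metadata or {}
    blob = metadata.get(b"huggingface")
    return json.loads(blob)["features"] if blob else None

# Round-trip a toy features description through a schema.
schema = pa.schema([("text", pa.string()), ("label", pa.int64())])
schema = schema_with_features(schema, {"label": {"dtype": "int64", "_type": "Value"}})
print(features_from_schema(schema))
```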
669,767,381
https://api.github.com/repos/huggingface/datasets/issues/464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/464/events
[]
null
2020-07-31T15:50:02Z
[]
https://github.com/huggingface/datasets/pull/464
MEMBER
null
false
null
[]
Add rename, remove and cast in-place operations
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/464/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYwMTAxNDYz
{ "diff_url": "https://github.com/huggingface/datasets/pull/464.diff", "html_url": "https://github.com/huggingface/datasets/pull/464", "merged_at": "2020-07-31T15:50:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/464.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/464" }
2020-07-31T12:30:21Z
https://api.github.com/repos/huggingface/datasets/issues/464/comments
Add a bunch of in-place operations leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method. These methods are added to `Dataset` as well as `DatasetDict`. Added tests for these new methods and added the methods to the doc. Naming follows the new pattern with a trailing underscore indicating in-place methods.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/464/timeline
closed
false
464
null
2020-07-31T15:50:00Z
null
true
669,735,455
https://api.github.com/repos/huggingface/datasets/issues/463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/463/events
[]
null
2020-08-24T14:54:42Z
[]
https://github.com/huggingface/datasets/pull/463
CONTRIBUTOR
null
false
null
[]
Add dataset/mlsum
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/463/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYwMDcyNjQ1
{ "diff_url": "https://github.com/huggingface/datasets/pull/463.diff", "html_url": "https://github.com/huggingface/datasets/pull/463", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/463" }
2020-07-31T11:50:52Z
https://api.github.com/repos/huggingface/datasets/issues/463/comments
New pull request that should correct the previous errors. The load_real_data test still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer" }
https://api.github.com/repos/huggingface/datasets/issues/463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/463/timeline
closed
false
463
null
2020-08-24T14:54:42Z
null
true
669,715,547
https://api.github.com/repos/huggingface/datasets/issues/462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/462/events
[]
null
2023-09-24T09:48:42Z
[]
https://github.com/huggingface/datasets/pull/462
CONTRIBUTOR
null
false
null
[]
add DoQA (ACL 2020) dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/462/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYwMDU0NDgz
{ "diff_url": "https://github.com/huggingface/datasets/pull/462.diff", "html_url": "https://github.com/huggingface/datasets/pull/462", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/462.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/462" }
2020-07-31T11:25:56Z
https://api.github.com/repos/huggingface/datasets/issues/462/comments
adds DoQA (ACL 2020) dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/462/timeline
closed
false
462
null
2020-08-03T11:28:27Z
null
true
669,703,508
https://api.github.com/repos/huggingface/datasets/issues/461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/461/events
[]
null
2023-09-24T09:48:40Z
[]
https://github.com/huggingface/datasets/pull/461
CONTRIBUTOR
null
false
null
[]
Doqa
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/461/reactions" }
MDExOlB1bGxSZXF1ZXN0NDYwMDQzNDY5
{ "diff_url": "https://github.com/huggingface/datasets/pull/461.diff", "html_url": "https://github.com/huggingface/datasets/pull/461", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/461.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/461" }
2020-07-31T11:11:12Z
https://api.github.com/repos/huggingface/datasets/issues/461/comments
add DoQA (ACL 2020) dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/461/timeline
closed
false
461
null
2020-07-31T11:13:15Z
null
true
669,585,256
https://api.github.com/repos/huggingface/datasets/issues/460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/460/events
[]
null
2020-07-31T11:32:19Z
[]
https://github.com/huggingface/datasets/pull/460
MEMBER
null
false
null
[]
Fix KeyboardInterrupt in map and bad indices in select
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/460/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU5OTM2OTU2
{ "diff_url": "https://github.com/huggingface/datasets/pull/460.diff", "html_url": "https://github.com/huggingface/datasets/pull/460", "merged_at": "2020-07-31T11:32:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/460.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/460" }
2020-07-31T08:57:15Z
https://api.github.com/repos/huggingface/datasets/issues/460/comments
If you interrupted a map function while it was writing, the cached file was not discarded. Therefore the next time you called map, it was loading an incomplete arrow file. We had the same issue with select if there was a bad index at one point. To fix that I used temporary files that are renamed once everything is finished.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/460/timeline
closed
false
460
null
2020-07-31T11:32:18Z
null
true
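The fix in #460 relies on a standard write-to-temp-then-rename pattern, so an interrupted `map`/`select` never leaves a truncated cache file behind. A generic sketch under that assumption (not the library's actual code):

```python
import os
import tempfile

def atomic_write(final_path, write_fn):
    # Write into a temporary file in the same directory, then move it into
    # place only if write_fn finished; a KeyboardInterrupt mid-write leaves
    # no partial file at final_path.
    dirname = os.path.dirname(os.path.abspath(final_path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".incomplete")
    try:
        with os.fdopen(fd, "wb") as f:
            write_fn(f)
        os.replace(tmp_path, final_path)  # atomic rename on POSIX and Windows
    except BaseException:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)  # discard the incomplete file
        raise

# Usage: the cache file only appears once the payload is fully written.
atomic_write("cache.arrow", lambda f: f.write(b"serialized arrow payload"))
```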
669,545,437
https://api.github.com/repos/huggingface/datasets/issues/459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/459/events
[]
null
2020-08-26T08:28:36Z
[]
https://github.com/huggingface/datasets/pull/459
MEMBER
null
false
null
[]
[Breaking] Update Dataset and DatasetDict API
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/459/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU5OTAxMjEy
{ "diff_url": "https://github.com/huggingface/datasets/pull/459.diff", "html_url": "https://github.com/huggingface/datasets/pull/459", "merged_at": "2020-08-26T08:28:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/459.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/459" }
2020-07-31T08:11:33Z
https://api.github.com/repos/huggingface/datasets/issues/459/comments
This PR contains a few breaking changes, so it's probably good to keep it for the next (major) release:

- rename the `flatten`, `drop` and `dictionary_encode_column` methods to `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects, as discussed in #166. From now on we should keep the convention of having a trailing underscore for methods which have an in-place effect. I also adopt the convention of not returning the (self) dataset for these methods. This is different from what PyTorch does, for instance (`model.to()` is in-place but returns the self model), but I feel it's a safer approach in terms of UX.
- remove the `dataset.columns` property, which returns a low-level Apache Arrow object and should not be used by users. Similarly, remove `dataset.nbytes`, which we don't really want to expose in this bare-bone format.
- add a few more properties and methods to `DatasetDict`
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/459/timeline
closed
false
459
null
2020-08-26T08:28:35Z
null
true
668,972,666
https://api.github.com/repos/huggingface/datasets/issues/458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/458/events
[]
null
2020-07-31T13:56:33Z
[]
https://github.com/huggingface/datasets/pull/458
MEMBER
null
false
null
[]
Install CoVal metric from github
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/458/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU5Mzk5ODg2
{ "diff_url": "https://github.com/huggingface/datasets/pull/458.diff", "html_url": "https://github.com/huggingface/datasets/pull/458", "merged_at": "2020-07-31T13:56:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/458.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/458" }
2020-07-30T16:59:25Z
https://api.github.com/repos/huggingface/datasets/issues/458/comments
Changed the import statements in `coval.py` to direct the user to install the original package from GitHub if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455)). Also changed the function call to use named rather than positional arguments.
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://api.github.com/repos/huggingface/datasets/issues/458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/458/timeline
closed
false
458
null
2020-07-31T13:56:33Z
null
true
668,898,386
https://api.github.com/repos/huggingface/datasets/issues/457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/457/events
[]
null
2020-07-30T17:34:36Z
[]
https://github.com/huggingface/datasets/pull/457
MEMBER
null
false
null
[]
add set_format to DatasetDict + tests
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/457/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU5MzMyOTM1
{ "diff_url": "https://github.com/huggingface/datasets/pull/457.diff", "html_url": "https://github.com/huggingface/datasets/pull/457", "merged_at": "2020-07-30T17:34:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/457.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/457" }
2020-07-30T15:53:20Z
https://api.github.com/repos/huggingface/datasets/issues/457/comments
Add `set_format`, `formated_as` and `reset_format` to `DatasetDict`. Add tests for these on `Dataset` and `DatasetDict`. Fix some bugs uncovered by the tests for `pandas` formatting.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/457/timeline
closed
false
457
null
2020-07-30T17:34:34Z
null
true
668,723,785
https://api.github.com/repos/huggingface/datasets/issues/456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/456/events
[]
null
2023-09-24T09:48:47Z
[]
https://github.com/huggingface/datasets/pull/456
CONTRIBUTOR
null
false
null
[]
add crd3(ACL 2020) dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/456/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU5MTc1MTY0
{ "diff_url": "https://github.com/huggingface/datasets/pull/456.diff", "html_url": "https://github.com/huggingface/datasets/pull/456", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/456.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/456" }
2020-07-30T13:28:35Z
https://api.github.com/repos/huggingface/datasets/issues/456/comments
This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/456/timeline
closed
false
456
null
2020-08-03T11:28:52Z
null
true
668,037,965
https://api.github.com/repos/huggingface/datasets/issues/455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/455/events
[]
null
2020-07-31T13:56:14Z
[]
https://github.com/huggingface/datasets/pull/455
MEMBER
null
false
null
[]
Add bleurt
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/455/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU4NTk4NTUw
{ "diff_url": "https://github.com/huggingface/datasets/pull/455.diff", "html_url": "https://github.com/huggingface/datasets/pull/455", "merged_at": "2020-07-31T13:56:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/455.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/455" }
2020-07-29T18:08:32Z
https://api.github.com/repos/huggingface/datasets/issues/455/comments
This PR adds the BLEURT metric to the library. The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). The default is set to `bleurt-base-128`.

Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer for our users to have a functioning metric when they call the default behavior; we'll address discrepancies in the issues/discussions if it comes up.

In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI.

cc @ankparikh @tsellam
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://api.github.com/repos/huggingface/datasets/issues/455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/455/timeline
closed
false
455
null
2020-07-31T13:56:14Z
null
true
668,011,577
https://api.github.com/repos/huggingface/datasets/issues/454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/454/events
[]
null
2020-07-29T21:45:52Z
[]
https://github.com/huggingface/datasets/pull/454
NONE
null
false
null
[]
Create SECURITY.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/454/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU4NTc3MzA3
{ "diff_url": "https://github.com/huggingface/datasets/pull/454.diff", "html_url": "https://github.com/huggingface/datasets/pull/454", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/454.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/454" }
2020-07-29T17:23:34Z
https://api.github.com/repos/huggingface/datasets/issues/454/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4", "events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}", "followers_url": "https://api.github.com/users/ChenZehong13/followers", "following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}", "gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenZehong13", "id": 56394989, "login": "ChenZehong13", "node_id": "MDQ6VXNlcjU2Mzk0OTg5", "organizations_url": "https://api.github.com/users/ChenZehong13/orgs", "received_events_url": "https://api.github.com/users/ChenZehong13/received_events", "repos_url": "https://api.github.com/users/ChenZehong13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenZehong13" }
https://api.github.com/repos/huggingface/datasets/issues/454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/454/timeline
closed
false
454
null
2020-07-29T21:45:52Z
null
true
667,728,247
https://api.github.com/repos/huggingface/datasets/issues/453
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/453/events
[]
null
2020-07-29T11:14:06Z
[]
https://github.com/huggingface/datasets/pull/453
MEMBER
null
false
null
[]
add builder tests
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/453/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU4MzQwNzky
{ "diff_url": "https://github.com/huggingface/datasets/pull/453.diff", "html_url": "https://github.com/huggingface/datasets/pull/453", "merged_at": "2020-07-29T11:14:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/453.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/453" }
2020-07-29T10:22:07Z
https://api.github.com/repos/huggingface/datasets/issues/453/comments
I added `as_dataset` and `download_and_prepare` to the tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/453/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/453/timeline
closed
false
453
null
2020-07-29T11:14:05Z
null
true
667,498,295
https://api.github.com/repos/huggingface/datasets/issues/452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/452/events
[]
null
2020-08-20T15:09:57Z
[]
https://github.com/huggingface/datasets/pull/452
CONTRIBUTOR
null
false
null
[]
Guardian authorship dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/452/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU4MTUzNjQy
{ "diff_url": "https://github.com/huggingface/datasets/pull/452.diff", "html_url": "https://github.com/huggingface/datasets/pull/452", "merged_at": "2020-08-20T15:07:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/452.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/452" }
2020-07-29T02:23:57Z
https://api.github.com/repos/huggingface/datasets/issues/452/comments
A new dataset: Guardian news articles for authorship attribution

**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship

**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'

Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* _glue - OSError: Cannot find data file.
* _newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist

Thank you for letting us contribute to such a huge and important library!

EDIT: I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
{ "avatar_url": "https://avatars.githubusercontent.com/u/25109412?v=4", "events_url": "https://api.github.com/users/malikaltakrori/events{/privacy}", "followers_url": "https://api.github.com/users/malikaltakrori/followers", "following_url": "https://api.github.com/users/malikaltakrori/following{/other_user}", "gists_url": "https://api.github.com/users/malikaltakrori/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/malikaltakrori", "id": 25109412, "login": "malikaltakrori", "node_id": "MDQ6VXNlcjI1MTA5NDEy", "organizations_url": "https://api.github.com/users/malikaltakrori/orgs", "received_events_url": "https://api.github.com/users/malikaltakrori/received_events", "repos_url": "https://api.github.com/users/malikaltakrori/repos", "site_admin": false, "starred_url": "https://api.github.com/users/malikaltakrori/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/malikaltakrori/subscriptions", "type": "User", "url": "https://api.github.com/users/malikaltakrori" }
https://api.github.com/repos/huggingface/datasets/issues/452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/452/timeline
closed
false
452
null
2020-08-20T15:07:56Z
null
true
667,210,468
https://api.github.com/repos/huggingface/datasets/issues/451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/451/events
[]
null
2020-07-29T13:57:23Z
[]
https://github.com/huggingface/datasets/pull/451
MEMBER
null
false
null
[]
Fix csv/json/txt cache dir
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/451/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU3OTIxNDMx
{ "diff_url": "https://github.com/huggingface/datasets/pull/451.diff", "html_url": "https://github.com/huggingface/datasets/pull/451", "merged_at": "2020-07-29T13:57:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/451.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/451" }
2020-07-28T16:30:51Z
https://api.github.com/repos/huggingface/datasets/issues/451/comments
The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user. To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir. This should fix #444
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/451/timeline
closed
false
451
null
2020-07-29T13:57:22Z
null
true
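The approach in #451, deriving the cache directory from the user-provided data files, can be sketched by hashing the resolved file paths; the names and layout here are illustrative only, not the actual `datasets` implementation:

```python
import hashlib
import os

def cache_dir_for(data_files, base_cache_dir="~/.cache/csv_builder"):
    # Hash the sorted absolute paths of the data files so that different
    # inputs land in different cache directories instead of colliding.
    h = hashlib.sha256()
    for path in sorted(os.path.abspath(p) for p in data_files):
        h.update(path.encode("utf-8"))
    return os.path.join(os.path.expanduser(base_cache_dir), h.hexdigest()[:16])

# Two different file sets now map to two different cache directories.
print(cache_dir_for(["train.csv", "test.csv"]))
print(cache_dir_for(["other.csv"]))
```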
667,074,120
https://api.github.com/repos/huggingface/datasets/issues/450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/450/events
[]
null
2020-07-29T13:30:18Z
[]
https://github.com/huggingface/datasets/pull/450
CONTRIBUTOR
null
false
null
[]
add sogou_news
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/450/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU3ODA5ODA2
{ "diff_url": "https://github.com/huggingface/datasets/pull/450.diff", "html_url": "https://github.com/huggingface/datasets/pull/450", "merged_at": "2020-07-29T13:30:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/450.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/450" }
2020-07-28T13:29:10Z
https://api.github.com/repos/huggingface/datasets/issues/450/comments
This PR adds the sogou news dataset #353
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/450/timeline
closed
false
450
null
2020-07-29T13:30:17Z
null
true
666,898,923
https://api.github.com/repos/huggingface/datasets/issues/449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/449/events
[]
null
2023-09-24T09:49:28Z
[]
https://github.com/huggingface/datasets/pull/449
CONTRIBUTOR
null
false
null
[]
add reuters21578 dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/449/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU3NjY0NjYx
{ "diff_url": "https://github.com/huggingface/datasets/pull/449.diff", "html_url": "https://github.com/huggingface/datasets/pull/449", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/449.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/449" }
2020-07-28T08:58:12Z
https://api.github.com/repos/huggingface/datasets/issues/449/comments
This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html #353 The dataset is a list of `.sgm` files, which differ from regular XML files, so `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do this, happy to get your opinion on it). In the Readme file, 3 ways to split the dataset are given: - The Modified Lewis ("ModLewis") Split: train, test and unused-set - The Modified Apte ("ModApte") Split: train, test and unused-set - The Modified Hayes ("ModHayes") Split: train and test Here I use the last one, as the readme file highlights that this split provides the ability to compare results with those of the first two splits.
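A minimal sketch of the line-by-line reading described above (simplified tag handling, assumed local file; the actual loading script may differ):

```python
def parse_sgm(path):
    """Yield (title, body) pairs from a Reuters-21578 .sgm file, read as plain text."""
    title, body, in_body = None, [], False
    with open(path, encoding="latin-1") as f:
        for line in f:
            line = line.strip()
            if line.startswith("<TITLE>"):
                title = line.replace("<TITLE>", "").replace("</TITLE>", "")
            elif "</BODY>" in line:
                body.append(line.split("</BODY>")[0])
                in_body = False
                yield title, " ".join(body).strip()
                title, body = None, []
            elif line.startswith("<BODY>"):
                in_body = True
                body.append(line.replace("<BODY>", ""))
            elif in_body:
                body.append(line)
```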
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/449/timeline
closed
false
449
null
2020-08-03T11:10:31Z
null
true
666,893,443
https://api.github.com/repos/huggingface/datasets/issues/448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/448/events
[]
null
2020-07-28T15:02:27Z
[]
https://github.com/huggingface/datasets/pull/448
CONTRIBUTOR
null
false
null
[]
add aws load metric test
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/448/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU3NjYwMDU2
{ "diff_url": "https://github.com/huggingface/datasets/pull/448.diff", "html_url": "https://github.com/huggingface/datasets/pull/448", "merged_at": "2020-07-28T15:02:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/448.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/448" }
2020-07-28T08:50:22Z
https://api.github.com/repos/huggingface/datasets/issues/448/comments
Following issue #445, I added a test that detects import errors in all metrics
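One possible shape for such a test (a hedged sketch, not necessarily the exact test added in the PR; it assumes the optional dependencies of each metric are installed):

```python
import nlp
import pytest

@pytest.mark.parametrize("metric_name", nlp.list_metrics())
def test_metric_script_imports(metric_name):
    # Loading the metric script executes its imports, so a wrong import path
    # (as in #445) surfaces as a failure of this test.
    nlp.load_metric(metric_name)
```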
{ "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "events_url": "https://api.github.com/users/idoh/events{/privacy}", "followers_url": "https://api.github.com/users/idoh/followers", "following_url": "https://api.github.com/users/idoh/following{/other_user}", "gists_url": "https://api.github.com/users/idoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/idoh", "id": 5303103, "login": "idoh", "node_id": "MDQ6VXNlcjUzMDMxMDM=", "organizations_url": "https://api.github.com/users/idoh/orgs", "received_events_url": "https://api.github.com/users/idoh/received_events", "repos_url": "https://api.github.com/users/idoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idoh/subscriptions", "type": "User", "url": "https://api.github.com/users/idoh" }
https://api.github.com/repos/huggingface/datasets/issues/448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/448/timeline
closed
false
448
null
2020-07-28T15:02:27Z
null
true
666,842,115
https://api.github.com/repos/huggingface/datasets/issues/447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/447/events
[]
null
2020-07-28T12:58:01Z
[]
https://github.com/huggingface/datasets/pull/447
CONTRIBUTOR
null
false
null
[]
[BugFix] fix wrong import of DEFAULT_TOKENIZER
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/447/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU3NjE2NDA0
{ "diff_url": "https://github.com/huggingface/datasets/pull/447.diff", "html_url": "https://github.com/huggingface/datasets/pull/447", "merged_at": "2020-07-28T12:52:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/447.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/447" }
2020-07-28T07:41:10Z
https://api.github.com/repos/huggingface/datasets/issues/447/comments
Fixed the path to `DEFAULT_TOKENIZER` #445
{ "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "events_url": "https://api.github.com/users/idoh/events{/privacy}", "followers_url": "https://api.github.com/users/idoh/followers", "following_url": "https://api.github.com/users/idoh/following{/other_user}", "gists_url": "https://api.github.com/users/idoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/idoh", "id": 5303103, "login": "idoh", "node_id": "MDQ6VXNlcjUzMDMxMDM=", "organizations_url": "https://api.github.com/users/idoh/orgs", "received_events_url": "https://api.github.com/users/idoh/received_events", "repos_url": "https://api.github.com/users/idoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idoh/subscriptions", "type": "User", "url": "https://api.github.com/users/idoh" }
https://api.github.com/repos/huggingface/datasets/issues/447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/447/timeline
closed
false
447
null
2020-07-28T12:52:05Z
null
true
666,837,351
https://api.github.com/repos/huggingface/datasets/issues/446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/446/events
[]
null
2020-07-28T07:34:46Z
[]
https://github.com/huggingface/datasets/pull/446
CONTRIBUTOR
null
false
null
[]
[BugFix] fix wrong import of DEFAULT_TOKENIZER
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/446/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU3NjEyNTg5
{ "diff_url": "https://github.com/huggingface/datasets/pull/446.diff", "html_url": "https://github.com/huggingface/datasets/pull/446", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/446.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/446" }
2020-07-28T07:32:47Z
https://api.github.com/repos/huggingface/datasets/issues/446/comments
Fixed the path to `DEFAULT_TOKENIZER` #445
{ "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "events_url": "https://api.github.com/users/idoh/events{/privacy}", "followers_url": "https://api.github.com/users/idoh/followers", "following_url": "https://api.github.com/users/idoh/following{/other_user}", "gists_url": "https://api.github.com/users/idoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/idoh", "id": 5303103, "login": "idoh", "node_id": "MDQ6VXNlcjUzMDMxMDM=", "organizations_url": "https://api.github.com/users/idoh/orgs", "received_events_url": "https://api.github.com/users/idoh/received_events", "repos_url": "https://api.github.com/users/idoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idoh/subscriptions", "type": "User", "url": "https://api.github.com/users/idoh" }
https://api.github.com/repos/huggingface/datasets/issues/446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/446/timeline
closed
false
446
null
2020-07-28T07:33:59Z
null
true
666,836,658
https://api.github.com/repos/huggingface/datasets/issues/445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/445/events
[]
null
2020-07-28T12:58:56Z
[]
https://github.com/huggingface/datasets/issues/445
CONTRIBUTOR
completed
null
null
[]
DEFAULT_TOKENIZER import error in sacrebleu
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/445/reactions" }
MDU6SXNzdWU2NjY4MzY2NTg=
null
2020-07-28T07:31:30Z
https://api.github.com/repos/huggingface/datasets/issues/445/comments
Latest version 0.3.0: when loading the metric "sacrebleu", there is an import error due to a wrong import path ![image](https://user-images.githubusercontent.com/5303103/88633063-2c5e5f00-d0bd-11ea-8ca8-4704dc975433.png)
{ "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "events_url": "https://api.github.com/users/idoh/events{/privacy}", "followers_url": "https://api.github.com/users/idoh/followers", "following_url": "https://api.github.com/users/idoh/following{/other_user}", "gists_url": "https://api.github.com/users/idoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/idoh", "id": 5303103, "login": "idoh", "node_id": "MDQ6VXNlcjUzMDMxMDM=", "organizations_url": "https://api.github.com/users/idoh/orgs", "received_events_url": "https://api.github.com/users/idoh/received_events", "repos_url": "https://api.github.com/users/idoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idoh/subscriptions", "type": "User", "url": "https://api.github.com/users/idoh" }
https://api.github.com/repos/huggingface/datasets/issues/445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/445/timeline
closed
false
445
null
2020-07-28T12:58:56Z
null
false
666,280,842
https://api.github.com/repos/huggingface/datasets/issues/444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/444/events
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
null
2020-07-29T13:57:22Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/444
NONE
completed
null
null
[]
Keeps loading old file even if I specify a new file in load_dataset
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/444/reactions" }
MDU6SXNzdWU2NjYyODA4NDI=
null
2020-07-27T13:08:06Z
https://api.github.com/repos/huggingface/datasets/issues/444/comments
I loaded a file called 'a.csv' with ``` dataset = load_dataset('csv', data_file='./a.csv') ``` And after a while, I tried to load another csv called 'b.csv' ``` dataset = load_dataset('csv', data_file='./b.csv') ``` However, the new dataset seems to retain the old 'a.csv' content instead of loading the new csv file. Even worse, after I load a.csv, the load_dataset function keeps loading 'a.csv' afterward. Is this a cache problem?
{ "avatar_url": "https://avatars.githubusercontent.com/u/10594453?v=4", "events_url": "https://api.github.com/users/joshhu/events{/privacy}", "followers_url": "https://api.github.com/users/joshhu/followers", "following_url": "https://api.github.com/users/joshhu/following{/other_user}", "gists_url": "https://api.github.com/users/joshhu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joshhu", "id": 10594453, "login": "joshhu", "node_id": "MDQ6VXNlcjEwNTk0NDUz", "organizations_url": "https://api.github.com/users/joshhu/orgs", "received_events_url": "https://api.github.com/users/joshhu/received_events", "repos_url": "https://api.github.com/users/joshhu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joshhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshhu/subscriptions", "type": "User", "url": "https://api.github.com/users/joshhu" }
https://api.github.com/repos/huggingface/datasets/issues/444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/444/timeline
closed
false
444
null
2020-07-29T13:57:22Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
666,246,716
https://api.github.com/repos/huggingface/datasets/issues/443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/443/events
[]
null
2020-07-27T13:05:11Z
[]
https://github.com/huggingface/datasets/issues/443
CONTRIBUTOR
completed
null
null
[]
Cannot unpickle saved .pt dataset with torch.save()/load()
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/443/reactions" }
MDU6SXNzdWU2NjYyNDY3MTY=
null
2020-07-27T12:13:37Z
https://api.github.com/repos/huggingface/datasets/issues/443/comments
Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling: ```python >>> import torch >>> import nlp >>> squad = nlp.load_dataset("squad.py", split="train") >>> squad Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599) >>> squad = squad.map(create_features, batched=True) >>> squad.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"]) >>> torch.save(squad, "squad.pt") >>> squad_pt = torch.load("squad.pt") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 593, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 773, in _legacy_load result = unpickler.load() File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/splits.py", line 493, in __setitem__ raise ValueError("Cannot add elem. Use .add() instead.") ValueError: Cannot add elem. Use .add() instead. ``` where `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a Dict with `input_ids`, `target_ids` and `attention_mask`. ```python def create_features(batch): source_text_encoding = tokenizer.batch_encode_plus( batch["source_text"], max_length=max_source_length, pad_to_max_length=True, truncation=True) target_text_encoding = tokenizer.batch_encode_plus( batch["target_text"], max_length=max_target_length, pad_to_max_length=True, truncation=True) features = { "source_ids": source_text_encoding["input_ids"], "target_ids": target_text_encoding["input_ids"], "attention_mask": source_text_encoding["attention_mask"] } return features ``` I found a similar issue in [issue 5267 in the huggingface/transformers repo](https://github.com/huggingface/transformers/issues/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however.
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
https://api.github.com/repos/huggingface/datasets/issues/443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/443/timeline
closed
false
443
null
2020-07-27T13:05:11Z
null
false
666,201,810
https://api.github.com/repos/huggingface/datasets/issues/442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/442/events
[ { "color": "72f99f", "default": false, "description": "Discussions on the datasets", "id": 2067401494, "name": "Dataset discussion", "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion" } ]
null
2020-08-24T15:13:20Z
[]
https://github.com/huggingface/datasets/issues/442
NONE
null
null
null
[]
[Suggestion] Glue Diagnostic Data with Labels
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/442/reactions" }
MDU6SXNzdWU2NjYyMDE4MTA=
null
2020-07-27T10:59:58Z
https://api.github.com/repos/huggingface/datasets/issues/442/comments
Hello! First of all, thanks for setting up this useful project! I've just realised you provide the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you have only a test set. Yet, the data with labels is available, too (see also [here](https://gluebenchmark.com/diagnostics#introduction)): https://www.dropbox.com/s/ju7d95ifb072q9f/diagnostic-full.tsv?dl=1 Have you considered incorporating it?
{ "avatar_url": "https://avatars.githubusercontent.com/u/3662782?v=4", "events_url": "https://api.github.com/users/ggbetz/events{/privacy}", "followers_url": "https://api.github.com/users/ggbetz/followers", "following_url": "https://api.github.com/users/ggbetz/following{/other_user}", "gists_url": "https://api.github.com/users/ggbetz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ggbetz", "id": 3662782, "login": "ggbetz", "node_id": "MDQ6VXNlcjM2NjI3ODI=", "organizations_url": "https://api.github.com/users/ggbetz/orgs", "received_events_url": "https://api.github.com/users/ggbetz/received_events", "repos_url": "https://api.github.com/users/ggbetz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ggbetz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggbetz/subscriptions", "type": "User", "url": "https://api.github.com/users/ggbetz" }
https://api.github.com/repos/huggingface/datasets/issues/442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/442/timeline
open
false
442
null
null
null
false
666,148,413
https://api.github.com/repos/huggingface/datasets/issues/441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/441/events
[]
null
2020-07-30T12:51:17Z
[]
https://github.com/huggingface/datasets/pull/441
MEMBER
null
false
null
[]
Add features parameter in load dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/441/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU3MDQyMjY3
{ "diff_url": "https://github.com/huggingface/datasets/pull/441.diff", "html_url": "https://github.com/huggingface/datasets/pull/441", "merged_at": "2020-07-30T12:51:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/441.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/441" }
2020-07-27T09:50:01Z
https://api.github.com/repos/huggingface/datasets/issues/441/comments
Added a `features` argument to `nlp.load_dataset`. If the specified features don't match the types inferred from the data, a `ValueError` is raised. It's a draft PR because #440 needs to be merged first.
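As a usage sketch (the file name and column types below are illustrative assumptions), the new argument could look like this:

```python
import nlp

# Declare the schema explicitly instead of letting it be inferred;
# if the declared features don't match the data, a ValueError is raised.
features = nlp.Features({
    "text": nlp.Value("string"),
    "label": nlp.ClassLabel(names=["neg", "pos"]),
})
dataset = nlp.load_dataset("csv", data_files="my_file.csv", features=features)
```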
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/441/timeline
closed
false
441
null
2020-07-30T12:51:16Z
null
true
666,116,823
https://api.github.com/repos/huggingface/datasets/issues/440
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/440/events
[]
null
2020-07-28T09:25:23Z
[]
https://github.com/huggingface/datasets/pull/440
MEMBER
null
false
null
[]
Fix user specified features in map
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/440/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU3MDE2MjQy
{ "diff_url": "https://github.com/huggingface/datasets/pull/440.diff", "html_url": "https://github.com/huggingface/datasets/pull/440", "merged_at": "2020-07-28T09:25:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/440.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/440" }
2020-07-27T09:04:26Z
https://api.github.com/repos/huggingface/datasets/issues/440/comments
`.map` didn't keep the user-specified features because of an issue in the writer: the writer used to overwrite the user-specified features with inferred features. I also added tests to make sure it doesn't happen again.
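For context, a hedged example of passing user-specified features to `.map` (the data file and column names are assumptions for illustration):

```python
import nlp

dataset = nlp.load_dataset("csv", data_files="my_file.csv", split="train")

features = nlp.Features({
    "text": nlp.Value("string"),
    "label": nlp.ClassLabel(names=["neg", "pos"]),
})

# With the fix, the features passed here are kept on the resulting dataset
# instead of being overwritten by the features inferred by the writer.
lowercased = dataset.map(lambda ex: {"text": ex["text"].lower()}, features=features)
```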
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/440/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/440/timeline
closed
false
440
null
2020-07-28T09:25:22Z
null
true
665,964,673
https://api.github.com/repos/huggingface/datasets/issues/439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/439/events
[]
null
2020-10-28T01:46:24Z
[]
https://github.com/huggingface/datasets/issues/439
NONE
completed
null
null
[]
Issues: Adding a FAISS or Elastic Search index to a Dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/439/reactions" }
MDU6SXNzdWU2NjU5NjQ2NzM=
null
2020-07-27T04:25:17Z
https://api.github.com/repos/huggingface/datasets/issues/439/comments
It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from GitHub in Colab. Is there any dependency on the latest PyArrow 1.0.0? Are they yet to be made generally available?
{ "avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4", "events_url": "https://api.github.com/users/nsankar/events{/privacy}", "followers_url": "https://api.github.com/users/nsankar/followers", "following_url": "https://api.github.com/users/nsankar/following{/other_user}", "gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nsankar", "id": 431890, "login": "nsankar", "node_id": "MDQ6VXNlcjQzMTg5MA==", "organizations_url": "https://api.github.com/users/nsankar/orgs", "received_events_url": "https://api.github.com/users/nsankar/received_events", "repos_url": "https://api.github.com/users/nsankar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsankar/subscriptions", "type": "User", "url": "https://api.github.com/users/nsankar" }
https://api.github.com/repos/huggingface/datasets/issues/439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/439/timeline
closed
false
439
null
2020-10-28T01:46:24Z
null
false
665,865,490
https://api.github.com/repos/huggingface/datasets/issues/438
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/438/events
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
null
2020-08-24T15:12:15Z
[]
https://github.com/huggingface/datasets/issues/438
CONTRIBUTOR
null
null
null
[]
New Datasets: IWSLT15+, ITTB
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/438/reactions" }
MDU6SXNzdWU2NjU4NjU0OTA=
null
2020-07-26T21:43:04Z
https://api.github.com/repos/huggingface/datasets/issues/438/comments
**Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-images.githubusercontent.com/6045025/88490093-0c1c8c00-cf67-11ea-960d-8dcaad2aa8eb.png) For future readers, we already have the following language pairs in the wmt namespaces: ``` wmt14: ['cs-en', 'de-en', 'fr-en', 'hi-en', 'ru-en'] wmt15: ['cs-en', 'de-en', 'fi-en', 'fr-en', 'ru-en'] wmt16: ['cs-en', 'de-en', 'fi-en', 'ro-en', 'ru-en', 'tr-en'] wmt17: ['cs-en', 'de-en', 'fi-en', 'lv-en', 'ru-en', 'tr-en', 'zh-en'] wmt18: ['cs-en', 'de-en', 'et-en', 'fi-en', 'kk-en', 'ru-en', 'tr-en', 'zh-en'] wmt19: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de'] ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
https://api.github.com/repos/huggingface/datasets/issues/438/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/438/timeline
open
false
438
null
null
null
false
665,597,176
https://api.github.com/repos/huggingface/datasets/issues/437
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/437/events
[]
null
2020-07-30T08:28:15Z
[]
https://github.com/huggingface/datasets/pull/437
MEMBER
null
false
null
[]
Fix XTREME PAN-X loading
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/437/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU2NjIzNjc3
{ "diff_url": "https://github.com/huggingface/datasets/pull/437.diff", "html_url": "https://github.com/huggingface/datasets/pull/437", "merged_at": "2020-07-30T08:28:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/437.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/437" }
2020-07-25T14:44:57Z
https://api.github.com/repos/huggingface/datasets/issues/437/comments
Hi 🤗 In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo. With the fix the output of the dataset should look as follows: ```python >>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') >>> dataset['train'][0] {'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'], 'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'], 'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']} ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lvwerra", "id": 8264887, "login": "lvwerra", "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "repos_url": "https://api.github.com/users/lvwerra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "type": "User", "url": "https://api.github.com/users/lvwerra" }
https://api.github.com/repos/huggingface/datasets/issues/437/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/437/timeline
closed
false
437
null
2020-07-30T08:28:15Z
null
true
665,582,167
https://api.github.com/repos/huggingface/datasets/issues/436
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/436/events
[]
null
2020-08-20T08:08:18Z
[]
https://github.com/huggingface/datasets/issues/436
NONE
completed
null
null
[]
Google Colab - load_dataset - PyArrow exception
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/436/reactions" }
MDU6SXNzdWU2NjU1ODIxNjc=
null
2020-07-25T13:05:20Z
https://api.github.com/repos/huggingface/datasets/issues/436/comments
With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting Colab gives the same issue: ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`. The error goes away only when I install version 0.16.0, i.e. !pip install pyarrow==0.16.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4", "events_url": "https://api.github.com/users/nsankar/events{/privacy}", "followers_url": "https://api.github.com/users/nsankar/followers", "following_url": "https://api.github.com/users/nsankar/following{/other_user}", "gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nsankar", "id": 431890, "login": "nsankar", "node_id": "MDQ6VXNlcjQzMTg5MA==", "organizations_url": "https://api.github.com/users/nsankar/orgs", "received_events_url": "https://api.github.com/users/nsankar/received_events", "repos_url": "https://api.github.com/users/nsankar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsankar/subscriptions", "type": "User", "url": "https://api.github.com/users/nsankar" }
https://api.github.com/repos/huggingface/datasets/issues/436/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/436/timeline
closed
false
436
null
2020-08-20T08:08:18Z
null
false
665,507,141
https://api.github.com/repos/huggingface/datasets/issues/435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/435/events
[]
null
2020-09-08T17:57:15Z
[]
https://github.com/huggingface/datasets/issues/435
NONE
completed
null
null
[]
ImportWarning for pyarrow 1.0.0
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/435/reactions" }
MDU6SXNzdWU2NjU1MDcxNDE=
null
2020-07-25T03:44:39Z
https://api.github.com/repos/huggingface/datasets/issues/435/comments
The following PR raises an ImportWarning with `pyarrow==1.0.0`: https://github.com/huggingface/nlp/pull/265/files
{ "avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4", "events_url": "https://api.github.com/users/HanGuo97/events{/privacy}", "followers_url": "https://api.github.com/users/HanGuo97/followers", "following_url": "https://api.github.com/users/HanGuo97/following{/other_user}", "gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HanGuo97", "id": 18187806, "login": "HanGuo97", "node_id": "MDQ6VXNlcjE4MTg3ODA2", "organizations_url": "https://api.github.com/users/HanGuo97/orgs", "received_events_url": "https://api.github.com/users/HanGuo97/received_events", "repos_url": "https://api.github.com/users/HanGuo97/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions", "type": "User", "url": "https://api.github.com/users/HanGuo97" }
https://api.github.com/repos/huggingface/datasets/issues/435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/435/timeline
closed
false
435
null
2020-08-03T16:37:32Z
null
false
665,477,638
https://api.github.com/repos/huggingface/datasets/issues/434
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/434/events
[]
null
2020-07-25T06:36:34Z
[]
https://github.com/huggingface/datasets/pull/434
CONTRIBUTOR
null
false
null
[]
Fixed check for pyarrow
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/434/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz
{ "diff_url": "https://github.com/huggingface/datasets/pull/434.diff", "html_url": "https://github.com/huggingface/datasets/pull/434", "merged_at": "2020-07-25T06:36:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/434.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/434" }
2020-07-25T00:16:53Z
https://api.github.com/repos/huggingface/datasets/issues/434/comments
Fix the check for pyarrow in __init__.py. Previously it would raise an error for pyarrow >= 1.0.0
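The usual pitfall here is comparing only one version component (e.g. requiring the minor version to be >= 16), which wrongly rejects 1.0.0. A hedged sketch of a safer check, not necessarily the exact code in the PR:

```python
import warnings
import pyarrow

def pyarrow_at_least(required="0.16.0"):
    # Compare full version tuples so that 1.0.0 counts as newer than 0.16.0.
    current = tuple(int(x) for x in pyarrow.__version__.split(".")[:3])
    needed = tuple(int(x) for x in required.split("."))
    return current >= needed

if not pyarrow_at_least("0.16.0"):
    warnings.warn("To use `nlp`, the module `pyarrow>=0.16.0` is required.", ImportWarning)
```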
{ "avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4", "events_url": "https://api.github.com/users/nadahlberg/events{/privacy}", "followers_url": "https://api.github.com/users/nadahlberg/followers", "following_url": "https://api.github.com/users/nadahlberg/following{/other_user}", "gists_url": "https://api.github.com/users/nadahlberg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nadahlberg", "id": 58701810, "login": "nadahlberg", "node_id": "MDQ6VXNlcjU4NzAxODEw", "organizations_url": "https://api.github.com/users/nadahlberg/orgs", "received_events_url": "https://api.github.com/users/nadahlberg/received_events", "repos_url": "https://api.github.com/users/nadahlberg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nadahlberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nadahlberg/subscriptions", "type": "User", "url": "https://api.github.com/users/nadahlberg" }
https://api.github.com/repos/huggingface/datasets/issues/434/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/434/timeline
closed
false
434
null
2020-07-25T06:36:34Z
null
true
665,311,025
https://api.github.com/repos/huggingface/datasets/issues/433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/433/events
[]
null
2022-10-04T17:59:34Z
[]
https://github.com/huggingface/datasets/issues/433
NONE
completed
null
null
[]
How to reuse functionality of a (generic) dataset?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions" }
MDU6SXNzdWU2NjUzMTEwMjU=
null
2020-07-24T17:27:37Z
https://api.github.com/repos/huggingface/datasets/issues/433/comments
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to reuse formats and loading functionality for datasets with a common format? In my case, it took a bit of time to create the Brat dataset and I think others would appreciate to not have to think about that again. Also, I assume there are other formats (e.g. conll) that are widely used, so having this would really ease dataset onboarding and adoption of the library.
{ "avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4", "events_url": "https://api.github.com/users/ArneBinder/events{/privacy}", "followers_url": "https://api.github.com/users/ArneBinder/followers", "following_url": "https://api.github.com/users/ArneBinder/following{/other_user}", "gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArneBinder", "id": 3375489, "login": "ArneBinder", "node_id": "MDQ6VXNlcjMzNzU0ODk=", "organizations_url": "https://api.github.com/users/ArneBinder/orgs", "received_events_url": "https://api.github.com/users/ArneBinder/received_events", "repos_url": "https://api.github.com/users/ArneBinder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions", "type": "User", "url": "https://api.github.com/users/ArneBinder" }
https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/433/timeline
closed
false
433
null
2022-10-04T17:59:33Z
null
false
665,234,340
https://api.github.com/repos/huggingface/datasets/issues/432
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/432/events
[]
null
2020-08-01T17:11:42Z
[]
https://github.com/huggingface/datasets/pull/432
CONTRIBUTOR
null
false
null
[]
Fix handling of config files while loading datasets from multiple processes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/432/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3
{ "diff_url": "https://github.com/huggingface/datasets/pull/432.diff", "html_url": "https://github.com/huggingface/datasets/pull/432", "merged_at": "2020-07-30T08:25:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/432.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/432" }
2020-07-24T15:10:57Z
https://api.github.com/repos/huggingface/datasets/issues/432/comments
When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written. This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., download all datasets to cache before spawning multiple processes.
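A minimal standard-library sketch of the "compare before copying" idea (the real change lives in the library's file utilities and may differ):

```python
import filecmp
import shutil

def copy_if_different(src, dst):
    # Skip the write when an identical file is already in place, so concurrent
    # processes stop rewriting (and corrupting reads of) the same file.
    try:
        if filecmp.cmp(src, dst, shallow=False):
            return False
    except FileNotFoundError:
        pass  # destination does not exist yet, fall through to the copy
    shutil.copyfile(src, dst)
    return True
```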
{ "avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4", "events_url": "https://api.github.com/users/orsharir/events{/privacy}", "followers_url": "https://api.github.com/users/orsharir/followers", "following_url": "https://api.github.com/users/orsharir/following{/other_user}", "gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/orsharir", "id": 99543, "login": "orsharir", "node_id": "MDQ6VXNlcjk5NTQz", "organizations_url": "https://api.github.com/users/orsharir/orgs", "received_events_url": "https://api.github.com/users/orsharir/received_events", "repos_url": "https://api.github.com/users/orsharir/repos", "site_admin": false, "starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orsharir/subscriptions", "type": "User", "url": "https://api.github.com/users/orsharir" }
https://api.github.com/repos/huggingface/datasets/issues/432/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/432/timeline
closed
false
432
null
2020-07-30T08:25:28Z
null
true
665,044,416
https://api.github.com/repos/huggingface/datasets/issues/431
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/431/events
[]
null
2020-07-31T09:05:04Z
[]
https://github.com/huggingface/datasets/pull/431
MEMBER
null
false
null
[]
Specify split post processing + Add post processing resources downloading
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/431/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU2MTgyNDE2
{ "diff_url": "https://github.com/huggingface/datasets/pull/431.diff", "html_url": "https://github.com/huggingface/datasets/pull/431", "merged_at": "2020-07-31T09:05:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/431.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/431" }
2020-07-24T09:29:19Z
https://api.github.com/repos/huggingface/datasets/issues/431/comments
Previously, if you tried to do ```python from nlp import load_dataset wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True) ``` Then you'd get an error `Index size should match Dataset size...` This was because it was trying to use the full index (21M elements). To fix that, I made it so post-processing resources can be named according to the split. I'm going to add tests on post-processing too. Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error saying that it is not synced (it'll be synced once it's merged): ``` =========================== short test summary info ============================ FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr ``` EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/431/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/431/timeline
closed
false
431
null
2020-07-31T09:05:03Z
null
true
664,583,837
https://api.github.com/repos/huggingface/datasets/issues/430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/430/events
[]
null
2020-08-04T01:01:53Z
[]
https://github.com/huggingface/datasets/pull/430
MEMBER
null
false
null
[]
add DatasetDict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/430/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU1ODAxOTI2
{ "diff_url": "https://github.com/huggingface/datasets/pull/430.diff", "html_url": "https://github.com/huggingface/datasets/pull/430", "merged_at": "2020-07-29T09:06:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/430.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/430" }
2020-07-23T15:43:49Z
https://api.github.com/repos/huggingface/datasets/issues/430/comments
## Add DatasetDict ### Overview When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example). If you wanted to apply dataset transforms you had to iterate over each split and apply the transform. Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split. Before: ```python from nlp import load_dataset squad = load_dataset("squad") print(squad.keys()) # dict_keys(['train', 'validation']) squad = { split_name: dataset.map(my_func) for split_name, dataset in squad.items() } print(squad.keys()) # dict_keys(['train', 'validation']) ``` Now: ```python from nlp import load_dataset squad = load_dataset("squad") print(squad.keys()) # dict_keys(['train', 'validation']) squad = squad.map(my_func) print(squad.keys()) # dict_keys(['train', 'validation']) ``` ### Dataset transforms `nlp.DatasetDict` implements the following dataset transforms: - map - filter - sort - shuffle ### Arguments The arguments of the methods are the same except for split-specific arguments like `cache_file_name`. For such arguments, the expected input is a dictionary `{split_name: argument_value}` It concerns: - `cache_file_name` in map, filter, sort, shuffle - `seed` and `generator` in shuffle
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/430/timeline
closed
false
430
null
2020-07-29T09:06:22Z
null
true
664,412,137
https://api.github.com/repos/huggingface/datasets/issues/429
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/429/events
[]
null
2020-07-31T11:46:20Z
[]
https://github.com/huggingface/datasets/pull/429
CONTRIBUTOR
null
false
null
[]
mlsum
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/429/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU1NjU2MDk5
{ "diff_url": "https://github.com/huggingface/datasets/pull/429.diff", "html_url": "https://github.com/huggingface/datasets/pull/429", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/429.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/429" }
2020-07-23T11:52:39Z
https://api.github.com/repos/huggingface/datasets/issues/429/comments
Hello, the load_real_data tests fail: as there is no default language subset to download, they look for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer" }
https://api.github.com/repos/huggingface/datasets/issues/429/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/429/timeline
closed
false
429
null
2020-07-31T11:46:20Z
null
true
664,367,086
https://api.github.com/repos/huggingface/datasets/issues/428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/428/events
[]
null
2020-07-23T10:35:00Z
[]
https://github.com/huggingface/datasets/pull/428
MEMBER
null
false
null
[]
fix concatenate_datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/428/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU1NjE3Nzcy
{ "diff_url": "https://github.com/huggingface/datasets/pull/428.diff", "html_url": "https://github.com/huggingface/datasets/pull/428", "merged_at": "2020-07-23T10:34:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/428.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/428" }
2020-07-23T10:30:59Z
https://api.github.com/repos/huggingface/datasets/issues/428/comments
`concatenate_datasets` used to test that the different `nlp.Dataset.schema` match, but this attribute was removed in #423
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/428/timeline
closed
false
428
null
2020-07-23T10:34:58Z
null
true
664,341,623
https://api.github.com/repos/huggingface/datasets/issues/427
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/427/events
[]
null
2020-07-23T13:09:30Z
[]
https://github.com/huggingface/datasets/pull/427
MEMBER
null
false
null
[]
Allow sequence features for beam + add processed Natural Questions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 3, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/427/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU1NTk1Nzc3
{ "diff_url": "https://github.com/huggingface/datasets/pull/427.diff", "html_url": "https://github.com/huggingface/datasets/pull/427", "merged_at": "2020-07-23T13:09:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/427.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/427" }
2020-07-23T09:52:41Z
https://api.github.com/repos/huggingface/datasets/issues/427/comments
## Allow Sequence features for Beam Datasets + add Natural Questions ### The issue The steps of beam datasets processing is the following: - download the source files and send them in a remote storage (gcs) - process the files using a beam runner (dataflow) - save output in remote storage (gcs) - convert output to arrow in remote storage (gcs) However it wasn't possible to process `natural_questions` because apache beam's processing outputs parquet files, and it's not yet possible to read parquet files with list features. ### The proposed solution To allow sequence features for beam I added a workaround that serializes the values using `json.dumps`, so that we end up with strings instead of the original features. Then when the arrow file is created, the serialized objects are transformed back to normal with `json.loads`. Not sure if there's a better way to do it. ### Natural Questions I was able to process NQ with it, and so I added the json infos file in this PR too. The processed arrow files are also stored in gcs. It allows you to load NQ with ```python from nlp import load_dataset nq = load_dataset("natural_questions") # download the 90GB arrow files from gcs and return the dataset ``` ### Tests I added a test case to make sure it works as expected. Note that the CI will fail because I am updating `natural_questions.py`: it's not synced with the script on S3. It will be synced as soon as this PR is merged. ``` =========================== short test summary info ============================ FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_natural_questions/default ```
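The serialization workaround can be illustrated with plain dictionaries; the helper names below are hypothetical and only the `json.dumps`/`json.loads` round trip reflects what the PR describes:

```python
import json

def serialize_sequences(example):
    # before writing parquet, list features are stored as JSON strings
    return {k: json.dumps(v) if isinstance(v, list) else v for k, v in example.items()}

def deserialize_sequences(example, list_columns):
    # when the arrow file is created, the strings are turned back into lists
    return {k: json.loads(v) if k in list_columns else v for k, v in example.items()}

raw = {"id": "q1", "tokens": ["What", "is", "NQ", "?"]}
stored = serialize_sequences(raw)
restored = deserialize_sequences(stored, list_columns={"tokens"})
assert restored == raw
```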
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/427/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/427/timeline
closed
false
427
null
2020-07-23T13:09:29Z
null
true
664,203,897
https://api.github.com/repos/huggingface/datasets/issues/426
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/426/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2021-03-12T09:34:12Z
[]
https://github.com/huggingface/datasets/issues/426
NONE
completed
null
null
[]
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/426/reactions" }
MDU6SXNzdWU2NjQyMDM4OTc=
null
2020-07-23T05:00:41Z
https://api.github.com/repos/huggingface/datasets/issues/426/comments
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_datasets()` function to join them all together?
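A rough sketch of that sharding idea, assuming the dataset exposes a `shard(num_shards, index)` method and that `nlp.concatenate_datasets` can join the processed pieces; a thread pool is used purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

import nlp

def my_func(example):
    # hypothetical per-example transform
    example["question"] = example["question"].strip()
    return example

dataset = nlp.load_dataset("squad", split="train")

num_shards = 4
shards = [dataset.shard(num_shards, index=i) for i in range(num_shards)]  # assumed API

with ThreadPoolExecutor(max_workers=num_shards) as pool:
    mapped_shards = list(pool.map(lambda shard: shard.map(my_func), shards))

# note: shards are interleaved, so the row order of the result differs from the original
dataset = nlp.concatenate_datasets(mapped_shards)
```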
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent" }
https://api.github.com/repos/huggingface/datasets/issues/426/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/426/timeline
closed
false
426
null
2020-09-07T14:48:04Z
null
false
664,029,848
https://api.github.com/repos/huggingface/datasets/issues/425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/425/events
[]
null
2020-08-02T13:30:34Z
[]
https://github.com/huggingface/datasets/issues/425
MEMBER
completed
null
null
[]
Correct data structure for PAN-X task in XTREME dataset?
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/425/reactions" }
MDU6SXNzdWU2NjQwMjk4NDg=
null
2020-07-22T20:29:20Z
https://api.github.com/repos/huggingface/datasets/issues/425/comments
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['train'] ``` However, I am not sure that `load_dataset()` is returning the correct data structure for NER. Currently, every row in `dataset_train` is of the form ```python {'word': str, 'ner_tag': str, 'lang': str} ``` but I think we actually want something like ```python {'words': List[str], 'ner_tags': List[str], 'langs': List[str]} ``` so that each row corresponds to a _sequence_ of words associated with each example. With the current data structure I do not think it is possible to transform `dataset_train` into a form suitable for training because we do not know the boundaries between examples. Indeed, [this line](https://github.com/google-research/xtreme/blob/522434d1aece34131d997a97ce7e9242a51a688a/third_party/utils_tag.py#L58) in the XTREME repo processes the texts as lists of sentences, tags, and languages. ## Proposed solution Replace ```python with open(filepath) as f: data = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE) for id_, row in enumerate(data): if row: lang, word = row[0].split(":")[0], row[0].split(":")[1] tag = row[1] yield id_, {"word": word, "ner_tag": tag, "lang": lang} ``` from [these lines](https://github.com/huggingface/nlp/blob/ce7d3a1d630b78fe27188d1706f3ea980e8eec43/datasets/xtreme/xtreme.py#L881-L887) of the `_generate_examples()` function with something like ```python guid_index = 1 with open(filepath, encoding="utf-8") as f: words = [] ner_tags = [] langs = [] for line in f: if line.startswith("-DOCSTART-") or line == "" or line == "\n": if words: yield guid_index, {"words": words, "ner_tags": ner_tags, "langs": langs} guid_index += 1 words = [] ner_tags = [] langs = [] else: # pan-x data is tab separated splits = line.split("\t") # strip out en: prefix langs.append(splits[0][:2]) words.append(splits[0][3:]) if len(splits) > 1: ner_tags.append(splits[-1].replace("\n", "")) else: # examples have no label in test set ner_tags.append("O") ``` If you agree, @lvwerra or I would be happy to implement this and create a PR.
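With the proposed change, every yielded example would group one sentence, for instance (values are illustrative):

```python
# shape of a single example after the proposed change
example = {
    "words": ["Paris", "is", "in", "France"],
    "ner_tags": ["B-LOC", "O", "O", "B-LOC"],
    "langs": ["en", "en", "en", "en"],
}
# the three sequences stay aligned token by token, so example boundaries are preserved
assert len(example["words"]) == len(example["ner_tags"]) == len(example["langs"])
```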
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/425/timeline
closed
false
425
null
2020-08-02T13:30:34Z
null
false
663,858,552
https://api.github.com/repos/huggingface/datasets/issues/424
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/424/events
[]
null
2020-07-23T14:27:58Z
[]
https://github.com/huggingface/datasets/pull/424
CONTRIBUTOR
null
false
null
[]
Web of science
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/424/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU1MTk4MTY0
{ "diff_url": "https://github.com/huggingface/datasets/pull/424.diff", "html_url": "https://github.com/huggingface/datasets/pull/424", "merged_at": "2020-07-23T14:27:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/424.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/424" }
2020-07-22T15:38:31Z
https://api.github.com/repos/huggingface/datasets/issues/424/comments
this PR adds the WebofScience dataset #353
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/424/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/424/timeline
closed
false
424
null
2020-07-23T14:27:56Z
null
true
663,079,359
https://api.github.com/repos/huggingface/datasets/issues/423
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/423/events
[]
null
2020-07-25T09:08:34Z
[]
https://github.com/huggingface/datasets/pull/423
MEMBER
null
false
null
[]
Change features vs schema logic
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/423/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU0NTU4OTA0
{ "diff_url": "https://github.com/huggingface/datasets/pull/423.diff", "html_url": "https://github.com/huggingface/datasets/pull/423", "merged_at": "2020-07-23T10:15:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/423.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/423" }
2020-07-21T14:52:47Z
https://api.github.com/repos/huggingface/datasets/issues/423/comments
## New logic for `nlp.Features` in datasets Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`. However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files. Changes: - Remove `schema` field in `nlp.Dataset` - Make `features` the source of truth to read/write examples - `features` can no longer be `None` in `nlp.Dataset` - Update `features` after each dataset transform such as `nlp.Dataset.map` Todo: change the tests to take these changes into account
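For readers unfamiliar with the object this PR makes the single source of truth, here is a small illustrative `nlp.Features` definition; the field names are made up:

```python
import nlp

# features describe every field of a dataset; the arrow schema used when writing
# files is now derived from this object instead of being stored separately
features = nlp.Features(
    {
        "text": nlp.Value("string"),
        "label": nlp.features.ClassLabel(names=["negative", "positive"]),
    }
)

print(features["label"].names)  # ['negative', 'positive']
```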
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/423/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/423/timeline
closed
false
423
null
2020-07-23T10:15:17Z
null
true
663,028,497
https://api.github.com/repos/huggingface/datasets/issues/422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/422/events
[]
null
2020-07-22T16:02:53Z
[]
https://github.com/huggingface/datasets/pull/422
CONTRIBUTOR
null
false
null
[]
- Corrected encoding for IMDB.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/422/reactions" }
MDExOlB1bGxSZXF1ZXN0NDU0NTE3MDU2
{ "diff_url": "https://github.com/huggingface/datasets/pull/422.diff", "html_url": "https://github.com/huggingface/datasets/pull/422", "merged_at": "2020-07-22T16:02:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/422.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/422" }
2020-07-21T13:46:59Z
https://api.github.com/repos/huggingface/datasets/issues/422/comments
The preparation phase (after the download phase) crashed on Windows because the charmap encoding could not decode certain characters. The change suggested in Issue #347 fixes it for the IMDB dataset.
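The change presumably amounts to opening the raw review files with an explicit encoding; a minimal sketch, with a hypothetical file path:

```python
# on Windows the default locale codec (charmap) cannot decode some IMDB reviews,
# so the files have to be read as UTF-8 explicitly
path = "aclImdb/train/pos/0_9.txt"  # hypothetical review file
with open(path, encoding="utf-8") as f:
    text = f.read()
```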
{ "avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4", "events_url": "https://api.github.com/users/ghazi-f/events{/privacy}", "followers_url": "https://api.github.com/users/ghazi-f/followers", "following_url": "https://api.github.com/users/ghazi-f/following{/other_user}", "gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghazi-f", "id": 25091538, "login": "ghazi-f", "node_id": "MDQ6VXNlcjI1MDkxNTM4", "organizations_url": "https://api.github.com/users/ghazi-f/orgs", "received_events_url": "https://api.github.com/users/ghazi-f/received_events", "repos_url": "https://api.github.com/users/ghazi-f/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions", "type": "User", "url": "https://api.github.com/users/ghazi-f" }
https://api.github.com/repos/huggingface/datasets/issues/422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/422/timeline
closed
false
422
null
2020-07-22T16:02:53Z
null
true
662,213,864
https://api.github.com/repos/huggingface/datasets/issues/421
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/421/events
[]
null
2020-07-22T16:08:40Z
[]
https://github.com/huggingface/datasets/pull/421
CONTRIBUTOR
null
false
null
[]
Style change
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/421/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUzNzkzMzQ1
{ "diff_url": "https://github.com/huggingface/datasets/pull/421.diff", "html_url": "https://github.com/huggingface/datasets/pull/421", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/421.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/421" }
2020-07-20T20:08:29Z
https://api.github.com/repos/huggingface/datasets/issues/421/comments
`make quality` and `make style` run on the scripts
{ "avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4", "events_url": "https://api.github.com/users/lordtt13/events{/privacy}", "followers_url": "https://api.github.com/users/lordtt13/followers", "following_url": "https://api.github.com/users/lordtt13/following{/other_user}", "gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lordtt13", "id": 35500534, "login": "lordtt13", "node_id": "MDQ6VXNlcjM1NTAwNTM0", "organizations_url": "https://api.github.com/users/lordtt13/orgs", "received_events_url": "https://api.github.com/users/lordtt13/received_events", "repos_url": "https://api.github.com/users/lordtt13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions", "type": "User", "url": "https://api.github.com/users/lordtt13" }
https://api.github.com/repos/huggingface/datasets/issues/421/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/421/timeline
closed
false
421
null
2020-07-22T16:08:39Z
null
true
662,029,782
https://api.github.com/repos/huggingface/datasets/issues/420
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/420/events
[]
null
2020-07-21T08:20:49Z
[]
https://github.com/huggingface/datasets/pull/420
MEMBER
null
false
null
[]
Better handle nested features
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/420/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUzNjI5OTk2
{ "diff_url": "https://github.com/huggingface/datasets/pull/420.diff", "html_url": "https://github.com/huggingface/datasets/pull/420", "merged_at": "2020-07-21T08:09:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/420.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/420" }
2020-07-20T16:44:13Z
https://api.github.com/repos/huggingface/datasets/issues/420/comments
Changes: - added arrow schema to features conversion (it's going to be useful to fix #342 ) - make flatten handle deep features (useful for tfrecords conversion in #339 ) - add tests for flatten and features conversions - the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies)
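To make the "deep features" wording concrete, here is a small sketch with SQuAD's nested `answers` field; whether `flatten` works in place and the exact column names it produces are assumptions about this version of the library:

```python
import nlp

dataset = nlp.load_dataset("squad", split="validation")

# 'answers' is a nested feature holding aligned 'text' and 'answer_start' sequences
print(dataset.features["answers"])

# flattening is assumed to expose the nested fields as top-level columns such as
# 'answers.text' and 'answers.answer_start', which is what a tfrecords export needs
dataset.flatten()
print(dataset.column_names)
```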
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/420/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/420/timeline
closed
false
420
null
2020-07-21T08:09:52Z
null
true
661,974,747
https://api.github.com/repos/huggingface/datasets/issues/419
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/419/events
[]
null
2020-07-24T08:22:01Z
[]
https://github.com/huggingface/datasets/pull/419
CONTRIBUTOR
null
false
null
[]
EmoContext dataset add
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/419/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUzNTgxNzQz
{ "diff_url": "https://github.com/huggingface/datasets/pull/419.diff", "html_url": "https://github.com/huggingface/datasets/pull/419", "merged_at": "2020-07-24T08:22:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/419.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/419" }
2020-07-20T15:48:45Z
https://api.github.com/repos/huggingface/datasets/issues/419/comments
EmoContext Dataset add Signed-off-by: lordtt13 <thakurtanmay72@yahoo.com>
{ "avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4", "events_url": "https://api.github.com/users/lordtt13/events{/privacy}", "followers_url": "https://api.github.com/users/lordtt13/followers", "following_url": "https://api.github.com/users/lordtt13/following{/other_user}", "gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lordtt13", "id": 35500534, "login": "lordtt13", "node_id": "MDQ6VXNlcjM1NTAwNTM0", "organizations_url": "https://api.github.com/users/lordtt13/orgs", "received_events_url": "https://api.github.com/users/lordtt13/received_events", "repos_url": "https://api.github.com/users/lordtt13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions", "type": "User", "url": "https://api.github.com/users/lordtt13" }
https://api.github.com/repos/huggingface/datasets/issues/419/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/419/timeline
closed
false
419
null
2020-07-24T08:22:00Z
null
true
661,914,873
https://api.github.com/repos/huggingface/datasets/issues/418
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/418/events
[]
null
2020-07-20T15:39:32Z
[]
https://github.com/huggingface/datasets/issues/418
CONTRIBUTOR
completed
null
null
[]
Addition of google drive links to dl_manager
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/418/reactions" }
MDU6SXNzdWU2NjE5MTQ4NzM=
null
2020-07-20T14:52:02Z
https://api.github.com/repos/huggingface/datasets/issues/418/comments
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig for SQUAD.""" def __init__(self, **kwargs): """BuilderConfig for EmoContext. Args: **kwargs: keyword arguments forwarded to super. """ super(EmoConfig, self).__init__(**kwargs) _TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing" _TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing" class EmoDataset(nlp.GeneratorBasedBuilder): """ SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0 """ VERSION = nlp.Version("1.0.0") force = False def _info(self): return nlp.DatasetInfo( description=_DESCRIPTION, features=nlp.Features( { "text": nlp.Value("string"), "label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]), } ), supervised_keys=None, homepage="https://www.aclweb.org/anthology/S19-2005/", citation=_CITATION, ) def _get_drive_url(self, url): base_url = 'https://drive.google.com/uc?id=' split_url = url.split('/') return base_url + split_url[5] def _split_generators(self, dl_manager): """Returns SplitGenerators.""" if(not os.path.exists("emo-train.json") or self.force): gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet = True) if(not os.path.exists("emo-test.json") or self.force): gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet = True) return [ nlp.SplitGenerator( name=nlp.Split.TRAIN, gen_kwargs={ "filepath": "emo-train.json", "split": "train", }, ), nlp.SplitGenerator( name=nlp.Split.TEST, gen_kwargs={"filepath": "emo-test.json", "split": "test"}, ), ] def _generate_examples(self, filepath, split): """ Yields examples. """ with open(filepath, 'rb') as f: data = json.load(f) for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()): yield id_, { "text": text, "label": label, } ``` Can someone help me in adding gdrive links to be used with default dl_manager or adding gdown as another dl_manager, because I'd like to add this dataset to nlp's official database.
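For clarity, the share-link conversion the script above relies on boils down to the following; the file id is taken from the links in the post:

```python
def get_drive_url(url):
    # turn a .../file/d/<id>/view share link into a direct uc?id=<id> download link
    base_url = "https://drive.google.com/uc?id="
    split_url = url.split("/")
    return base_url + split_url[5]

train_url = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing"
print(get_drive_url(train_url))
# https://drive.google.com/uc?id=12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X
```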
{ "avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4", "events_url": "https://api.github.com/users/lordtt13/events{/privacy}", "followers_url": "https://api.github.com/users/lordtt13/followers", "following_url": "https://api.github.com/users/lordtt13/following{/other_user}", "gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lordtt13", "id": 35500534, "login": "lordtt13", "node_id": "MDQ6VXNlcjM1NTAwNTM0", "organizations_url": "https://api.github.com/users/lordtt13/orgs", "received_events_url": "https://api.github.com/users/lordtt13/received_events", "repos_url": "https://api.github.com/users/lordtt13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions", "type": "User", "url": "https://api.github.com/users/lordtt13" }
https://api.github.com/repos/huggingface/datasets/issues/418/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/418/timeline
closed
false
418
null
2020-07-20T15:39:32Z
null
false
661,804,054
https://api.github.com/repos/huggingface/datasets/issues/417
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/417/events
[]
null
2020-07-22T09:51:00Z
[]
https://github.com/huggingface/datasets/pull/417
MEMBER
null
false
null
[]
Fix docstrings for multiple metrics instances
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/417/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUzNDMyODE5
{ "diff_url": "https://github.com/huggingface/datasets/pull/417.diff", "html_url": "https://github.com/huggingface/datasets/pull/417", "merged_at": "2020-07-22T09:50:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/417.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/417" }
2020-07-20T13:08:59Z
https://api.github.com/repos/huggingface/datasets/issues/417/comments
We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated). This should fix #304
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/417/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/417/timeline
closed
false
417
null
2020-07-22T09:50:59Z
null
true
661,635,393
https://api.github.com/repos/huggingface/datasets/issues/416
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/416/events
[]
null
2020-07-21T08:15:46Z
[]
https://github.com/huggingface/datasets/pull/416
MEMBER
null
false
null
[]
Fix xtreme panx directory
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/416/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUzMjg1NTM4
{ "diff_url": "https://github.com/huggingface/datasets/pull/416.diff", "html_url": "https://github.com/huggingface/datasets/pull/416", "merged_at": "2020-07-21T08:15:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/416.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/416" }
2020-07-20T10:09:17Z
https://api.github.com/repos/huggingface/datasets/issues/416/comments
Fix #412
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/416/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/416/timeline
closed
false
416
null
2020-07-21T08:15:44Z
null
true
660,687,076
https://api.github.com/repos/huggingface/datasets/issues/415
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/415/events
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
null
2020-07-20T09:54:26Z
[]
https://github.com/huggingface/datasets/issues/415
NONE
null
null
null
[]
Something is wrong with WMT 19 kk-en dataset
{ "+1": 0, "-1": 0, "confused": 1, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/415/reactions" }
MDU6SXNzdWU2NjA2ODcwNzY=
null
2020-07-19T08:18:51Z
https://api.github.com/repos/huggingface/datasets/issues/415/comments
The translation in the `train` set does not look right: ``` >>>import nlp >>>from nlp import load_dataset >>>dataset = load_dataset('wmt19', 'kk-en') >>>dataset["train"]["translation"][0] {'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'} >>>dataset["validation"]["translation"][0] {'kk': 'Ақша-несие саясатының сценарийін қайта жазсақ', 'en': 'Rewriting the Monetary-Policy Script'} ```
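Until the dataset script is corrected, one possible client-side workaround consistent with what is shown above would be to swap the keys in the train split; this is an untested sketch and assumes only the train split is affected:

```python
from nlp import load_dataset

dataset = load_dataset("wmt19", "kk-en")

def swap_translation(example):
    # the train split appears to have 'kk' and 'en' inverted
    pair = example["translation"]
    example["translation"] = {"kk": pair["en"], "en": pair["kk"]}
    return example

dataset["train"] = dataset["train"].map(swap_translation)
```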
{ "avatar_url": "https://avatars.githubusercontent.com/u/32014649?v=4", "events_url": "https://api.github.com/users/ChenghaoMou/events{/privacy}", "followers_url": "https://api.github.com/users/ChenghaoMou/followers", "following_url": "https://api.github.com/users/ChenghaoMou/following{/other_user}", "gists_url": "https://api.github.com/users/ChenghaoMou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenghaoMou", "id": 32014649, "login": "ChenghaoMou", "node_id": "MDQ6VXNlcjMyMDE0NjQ5", "organizations_url": "https://api.github.com/users/ChenghaoMou/orgs", "received_events_url": "https://api.github.com/users/ChenghaoMou/received_events", "repos_url": "https://api.github.com/users/ChenghaoMou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenghaoMou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenghaoMou/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenghaoMou" }
https://api.github.com/repos/huggingface/datasets/issues/415/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/415/timeline
open
false
415
null
null
null
false
660,654,013
https://api.github.com/repos/huggingface/datasets/issues/414
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/414/events
[]
null
2020-07-21T02:21:17Z
[]
https://github.com/huggingface/datasets/issues/414
NONE
completed
null
null
[]
from_dict delete?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/414/reactions" }
MDU6SXNzdWU2NjA2NTQwMTM=
null
2020-07-19T07:08:36Z
https://api.github.com/repos/huggingface/datasets/issues/414/comments
AttributeError: type object 'Dataset' has no attribute 'from_dict'
{ "avatar_url": "https://avatars.githubusercontent.com/u/22817243?v=4", "events_url": "https://api.github.com/users/hackerxiaobai/events{/privacy}", "followers_url": "https://api.github.com/users/hackerxiaobai/followers", "following_url": "https://api.github.com/users/hackerxiaobai/following{/other_user}", "gists_url": "https://api.github.com/users/hackerxiaobai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hackerxiaobai", "id": 22817243, "login": "hackerxiaobai", "node_id": "MDQ6VXNlcjIyODE3MjQz", "organizations_url": "https://api.github.com/users/hackerxiaobai/orgs", "received_events_url": "https://api.github.com/users/hackerxiaobai/received_events", "repos_url": "https://api.github.com/users/hackerxiaobai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hackerxiaobai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hackerxiaobai/subscriptions", "type": "User", "url": "https://api.github.com/users/hackerxiaobai" }
https://api.github.com/repos/huggingface/datasets/issues/414/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/414/timeline
closed
false
414
null
2020-07-21T02:21:17Z
null
false
660,063,655
https://api.github.com/repos/huggingface/datasets/issues/413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/413/events
[]
null
2022-02-11T09:50:21Z
[]
https://github.com/huggingface/datasets/issues/413
NONE
completed
null
null
[]
Is there a way to download only NQ dev?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/413/reactions" }
MDU6SXNzdWU2NjAwNjM2NTU=
null
2020-07-18T10:28:23Z
https://api.github.com/repos/huggingface/datasets/issues/413/comments
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", beam_runner="DirectRunner") ``` But this still triggered a big download of presumably the whole dataset. Is there any way of doing this or are splits / slicing options only available after downloading? Thanks!
{ "avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4", "events_url": "https://api.github.com/users/tholor/events{/privacy}", "followers_url": "https://api.github.com/users/tholor/followers", "following_url": "https://api.github.com/users/tholor/following{/other_user}", "gists_url": "https://api.github.com/users/tholor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tholor", "id": 1563902, "login": "tholor", "node_id": "MDQ6VXNlcjE1NjM5MDI=", "organizations_url": "https://api.github.com/users/tholor/orgs", "received_events_url": "https://api.github.com/users/tholor/received_events", "repos_url": "https://api.github.com/users/tholor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tholor/subscriptions", "type": "User", "url": "https://api.github.com/users/tholor" }
https://api.github.com/repos/huggingface/datasets/issues/413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/413/timeline
closed
false
413
null
2022-02-11T09:50:21Z
null
false
660,047,139
https://api.github.com/repos/huggingface/datasets/issues/412
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/412/events
[]
null
2020-07-21T08:15:44Z
[]
https://github.com/huggingface/datasets/issues/412
MEMBER
completed
null
null
[]
Unable to load XTREME dataset from disk
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/412/reactions" }
MDU6SXNzdWU2NjAwNDcxMzk=
null
2020-07-18T09:55:00Z
https://api.github.com/repos/huggingface/datasets/issues/412/comments
Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPhotos.zip` file from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) and am running into a `FileNotFoundError` when I point to the location of the dataset. As far as I can tell, the problem is that `AmazonPhotos.zip` decompresses to `panx_dataset` and `load_dataset()` is not looking in the correct path: ``` # path where load_dataset is looking for fr.tar.gz /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/ # path where it actually exists /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/panx_dataset/ ``` ## Steps to reproduce the problem 1. Manually download the XTREME benchmark from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) 2. Run the following code snippet ```python from nlp import load_dataset # AmazonPhotos.zip is in the root of the folder dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./') ``` 3. Here is the stack trace ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-4-26786bb5fa93> in <module> ----> 1 dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./') /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 464 split_dict = SplitDict(dataset_name=self.name) 465 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 466 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 467 # Checksums verification 468 if verify_infos: /usr/local/lib/python3.6/dist-packages/nlp/datasets/xtreme/b8c2ed3583a7a7ac60b503576dfed3271ac86757628897e945bd329c43b8a746/xtreme.py in _split_generators(self, dl_manager) 725 panx_dl_dir = dl_manager.extract(panx_path) 726 lang = self.config.name.split(".")[1] --> 727 lang_folder = dl_manager.extract(os.path.join(panx_dl_dir, lang + ".tar.gz")) 728 return [ 729 nlp.SplitGenerator( /usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in extract(self, path_or_paths) 196 """ 197 return map_nested( --> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths, 199 ) 200 /usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py in 
map_nested(function, data_struct, dict_only, map_tuple) 170 return tuple(mapped) 171 # Singleton --> 172 return function(data_struct) 173 174 /usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in <lambda>(path) 196 """ 197 return map_nested( --> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths, 199 ) 200 /usr/local/lib/python3.6/dist-packages/nlp/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 203 elif urlparse(url_or_filename).scheme == "": 204 # File, but it doesn't exist. --> 205 raise FileNotFoundError("Local file {} doesn't exist".format(url_or_filename)) 206 else: 207 # Something unknown FileNotFoundError: Local file /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/fr.tar.gz doesn't exist ``` ## OS and hardware ``` - `nlp` version: 0.3.0 - Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.1.0 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/412/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/412/timeline
closed
false
412
null
2020-07-21T08:15:44Z
null
false
659,393,398
https://api.github.com/repos/huggingface/datasets/issues/411
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/411/events
[]
null
2020-07-21T09:13:46Z
[]
https://github.com/huggingface/datasets/pull/411
CONTRIBUTOR
null
false
null
[]
Sbf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/411/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUxMjQxOTQy
{ "diff_url": "https://github.com/huggingface/datasets/pull/411.diff", "html_url": "https://github.com/huggingface/datasets/pull/411", "merged_at": "2020-07-21T09:13:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/411.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/411" }
2020-07-17T16:19:45Z
https://api.github.com/repos/huggingface/datasets/issues/411/comments
This PR adds the Social Bias Frames Dataset (ACL 2020). Dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/411/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/411/timeline
closed
false
411
null
2020-07-21T09:13:45Z
null
true
659,242,871
https://api.github.com/repos/huggingface/datasets/issues/410
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/410/events
[]
null
2020-07-20T07:05:29Z
[]
https://github.com/huggingface/datasets/pull/410
CONTRIBUTOR
null
false
null
[]
20newsgroup
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/410/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUxMTEzMTI3
{ "diff_url": "https://github.com/huggingface/datasets/pull/410.diff", "html_url": "https://github.com/huggingface/datasets/pull/410", "merged_at": "2020-07-20T07:05:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/410.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/410" }
2020-07-17T13:07:57Z
https://api.github.com/repos/huggingface/datasets/issues/410/comments
Add 20Newsgroup dataset. #353
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/410/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/410/timeline
closed
false
410
null
2020-07-20T07:05:28Z
null
true
659,128,611
https://api.github.com/repos/huggingface/datasets/issues/409
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/409/events
[]
null
2020-07-21T14:34:52Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/409
NONE
completed
null
null
[]
train_test_split error: 'dict' object has no attribute 'deepcopy'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/409/reactions" }
MDU6SXNzdWU2NTkxMjg2MTE=
null
2020-07-17T10:36:28Z
https://api.github.com/repos/huggingface/datasets/issues/409/comments
`train_test_split` is giving me an error when I try and call it: `'dict' object has no attribute 'deepcopy'` ## To reproduce ``` dataset = load_dataset('glue', 'mrpc', split='train') dataset = dataset.train_test_split(test_size=0.2) ``` ## Full Stacktrace ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-12-feb740dbec9a> in <module> 1 dataset = load_dataset('glue', 'mrpc', split='train') ----> 2 dataset = dataset.train_test_split(test_size=0.2) ~/anaconda3/envs/fastai2_me/lib/python3.7/site-packages/nlp/arrow_dataset.py in train_test_split(self, test_size, train_size, shuffle, seed, generator, keep_in_memory, load_from_cache_file, train_cache_file_name, test_cache_file_name, writer_batch_size) 1032 "writer_batch_size": writer_batch_size, 1033 } -> 1034 train_kwargs = cache_kwargs.deepcopy() 1035 train_kwargs["split"] = "train" 1036 test_kwargs = cache_kwargs.deepcopy() AttributeError: 'dict' object has no attribute 'deepcopy' ```
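The traceback points at `cache_kwargs.deepcopy()`, and plain dicts have no such method; the library-side fix is presumably the standard-library call below, shown here on a stand-in dict rather than the actual source:

```python
import copy

cache_kwargs = {"writer_batch_size": 1000, "keep_in_memory": False}  # stand-in values

# dict only exposes a shallow .copy(); deep copies come from the copy module
train_kwargs = copy.deepcopy(cache_kwargs)
train_kwargs["split"] = "train"

test_kwargs = copy.deepcopy(cache_kwargs)
test_kwargs["split"] = "test"
```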
{ "avatar_url": "https://avatars.githubusercontent.com/u/20516801?v=4", "events_url": "https://api.github.com/users/morganmcg1/events{/privacy}", "followers_url": "https://api.github.com/users/morganmcg1/followers", "following_url": "https://api.github.com/users/morganmcg1/following{/other_user}", "gists_url": "https://api.github.com/users/morganmcg1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/morganmcg1", "id": 20516801, "login": "morganmcg1", "node_id": "MDQ6VXNlcjIwNTE2ODAx", "organizations_url": "https://api.github.com/users/morganmcg1/orgs", "received_events_url": "https://api.github.com/users/morganmcg1/received_events", "repos_url": "https://api.github.com/users/morganmcg1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/morganmcg1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/morganmcg1/subscriptions", "type": "User", "url": "https://api.github.com/users/morganmcg1" }
https://api.github.com/repos/huggingface/datasets/issues/409/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/409/timeline
closed
false
409
null
2020-07-21T14:34:52Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
659,064,144
https://api.github.com/repos/huggingface/datasets/issues/408
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/408/events
[]
null
2020-07-17T09:26:57Z
[]
https://github.com/huggingface/datasets/pull/408
MEMBER
null
false
null
[]
Add tests datasets gcp
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/408/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUwOTU1MTE0
{ "diff_url": "https://github.com/huggingface/datasets/pull/408.diff", "html_url": "https://github.com/huggingface/datasets/pull/408", "merged_at": "2020-07-17T09:26:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/408.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/408" }
2020-07-17T09:23:27Z
https://api.github.com/repos/huggingface/datasets/issues/408/comments
Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data. These tests make sure that they're always available. It also makes sure that their scripts are in sync between S3 and the repo. This should avoid future issues like #407
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/408/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/408/timeline
closed
false
408
null
2020-07-17T09:26:56Z
null
true
658,672,736
https://api.github.com/repos/huggingface/datasets/issues/407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/407/events
[]
null
2021-01-12T11:41:16Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/407
CONTRIBUTOR
completed
null
null
[]
MissingBeamOptions for Wikipedia 20200501.en
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/407/reactions" }
MDU6SXNzdWU2NTg2NzI3MzY=
null
2020-07-16T23:48:03Z
https://api.github.com/repos/huggingface/datasets/issues/407/comments
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, total: 34.06 GiB) to /home/hltcoe/mgordon/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/76b0b2747b679bb0ee7a1621e50e5a6378477add0c662668a324a5bc07d516dd... Traceback (most recent call last): File "scripts/download.py", line 11, in <module> fire.Fire(download_pretrain) File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 138, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire target=component.__name__) File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "scripts/download.py", line 6, in download_pretrain nlp.load_dataset('wikipedia', "20200501.en", split='train') File "/exp/mgordon/nlp/src/nlp/load.py", line 534, in load_dataset save_infos=save_infos, File "/exp/mgordon/nlp/src/nlp/builder.py", line 460, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/exp/mgordon/nlp/src/nlp/builder.py", line 870, in _download_and_prepare "\n\t`{}`".format(usage_example) nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, S park, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.en', beam_runner='DirectRunner')` ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4", "events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}", "followers_url": "https://api.github.com/users/mitchellgordon95/followers", "following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}", "gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mitchellgordon95", "id": 7490438, "login": "mitchellgordon95", "node_id": "MDQ6VXNlcjc0OTA0Mzg=", "organizations_url": "https://api.github.com/users/mitchellgordon95/orgs", "received_events_url": "https://api.github.com/users/mitchellgordon95/received_events", "repos_url": "https://api.github.com/users/mitchellgordon95/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions", "type": "User", "url": "https://api.github.com/users/mitchellgordon95" }
https://api.github.com/repos/huggingface/datasets/issues/407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/407/timeline
closed
false
407
null
2020-07-17T14:24:28Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
658,581,764
https://api.github.com/repos/huggingface/datasets/issues/406
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/406/events
[]
null
2023-08-16T09:52:39Z
[]
https://github.com/huggingface/datasets/issues/406
CONTRIBUTOR
completed
null
null
[]
Faster Shuffling?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/406/reactions" }
MDU6SXNzdWU2NTg1ODE3NjQ=
null
2020-07-16T21:21:53Z
https://api.github.com/repos/huggingface/datasets/issues/406/comments
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`. But I can also just write the lines to a text file: ``` batch_size = 100000 with open('tmp.txt', 'w+') as out_f: for i in tqdm(range(0, len(dataset), batch_size)): batch = dataset[i:i+batch_size]['text'] print("\n".join(batch), file=out_f) ``` Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally, ``` dataset = nlp.load_dataset('text', data_files='tmp2.txt') ``` Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping. Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.)
{ "avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4", "events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}", "followers_url": "https://api.github.com/users/mitchellgordon95/followers", "following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}", "gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mitchellgordon95", "id": 7490438, "login": "mitchellgordon95", "node_id": "MDQ6VXNlcjc0OTA0Mzg=", "organizations_url": "https://api.github.com/users/mitchellgordon95/orgs", "received_events_url": "https://api.github.com/users/mitchellgordon95/received_events", "repos_url": "https://api.github.com/users/mitchellgordon95/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions", "type": "User", "url": "https://api.github.com/users/mitchellgordon95" }
https://api.github.com/repos/huggingface/datasets/issues/406/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/406/timeline
closed
false
406
null
2020-09-07T14:45:25Z
null
false
658,580,192
https://api.github.com/repos/huggingface/datasets/issues/405
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/405/events
[]
null
2020-07-17T17:05:44Z
[]
https://github.com/huggingface/datasets/pull/405
CONTRIBUTOR
null
false
null
[]
Make select() faster by batching reads
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/405/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUwNTI1MTc3
{ "diff_url": "https://github.com/huggingface/datasets/pull/405.diff", "html_url": "https://github.com/huggingface/datasets/pull/405", "merged_at": "2020-07-17T16:51:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/405.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/405" }
2020-07-16T21:19:45Z
https://api.github.com/repos/huggingface/datasets/issues/405/comments
Here's a benchmark: ``` dataset = nlp.load_dataset('bookcorpus', split='train') start = time.time() dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False) end = time.time() print(f'{end - start}') start = time.time() dataset.select(np.arange(1000), reader_batch_size=1000, load_from_cache_file=False) end = time.time() print(f'{end - start}') ``` Without batching, select takes around 1.27 seconds. With batching, it takes around 0.01 seconds. The slowness was upsetting me because dataset.shuffle() was supposed to take ~27 hours for bookcorpus. Now with the fix it takes ~2.5 hours (which still is pretty slow, but I'll open a separate issue for that).
{ "avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4", "events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}", "followers_url": "https://api.github.com/users/mitchellgordon95/followers", "following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}", "gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mitchellgordon95", "id": 7490438, "login": "mitchellgordon95", "node_id": "MDQ6VXNlcjc0OTA0Mzg=", "organizations_url": "https://api.github.com/users/mitchellgordon95/orgs", "received_events_url": "https://api.github.com/users/mitchellgordon95/received_events", "repos_url": "https://api.github.com/users/mitchellgordon95/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions", "type": "User", "url": "https://api.github.com/users/mitchellgordon95" }
https://api.github.com/repos/huggingface/datasets/issues/405/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/405/timeline
closed
false
405
null
2020-07-17T16:51:26Z
null
true
658,400,987
https://api.github.com/repos/huggingface/datasets/issues/404
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/404/events
[]
null
2020-07-20T10:12:35Z
[]
https://github.com/huggingface/datasets/pull/404
MEMBER
null
false
null
[]
Add seed in metrics
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/404/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUwMzY4Mjg4
{ "diff_url": "https://github.com/huggingface/datasets/pull/404.diff", "html_url": "https://github.com/huggingface/datasets/pull/404", "merged_at": "2020-07-20T10:12:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/404.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/404" }
2020-07-16T17:27:05Z
https://api.github.com/repos/huggingface/datasets/issues/404/comments
With #361 we noticed that some metrics were not deterministic. In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`. The seed is set only when `compute` is called, and reset afterwards. Moreover, when calling `compute` with the same metric instance (i.e. same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused. However, instantiating a metric twice (two different experiments) without specifying a seed can create different results.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/404/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/404/timeline
closed
false
404
null
2020-07-20T10:12:34Z
null
true
658,325,756
https://api.github.com/repos/huggingface/datasets/issues/403
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/403/events
[]
null
2020-07-17T11:37:01Z
[]
https://github.com/huggingface/datasets/pull/403
MEMBER
null
false
null
[]
return python objects instead of arrays by default
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/403/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUwMzAzNjI2
{ "diff_url": "https://github.com/huggingface/datasets/pull/403.diff", "html_url": "https://github.com/huggingface/datasets/pull/403", "merged_at": "2020-07-17T11:37:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/403.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/403" }
2020-07-16T15:51:52Z
https://api.github.com/repos/huggingface/datasets/issues/403/comments
We were using to_pandas() to convert from arrow types; however, it returns numpy arrays instead of python lists. I fixed it by using to_pydict/to_pylist instead. Fix #387. It was mentioned in https://github.com/huggingface/transformers/issues/5729
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/403/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/403/timeline
closed
false
403
null
2020-07-17T11:37:00Z
null
true
658,001,288
https://api.github.com/repos/huggingface/datasets/issues/402
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/402/events
[]
null
2020-07-16T14:27:00Z
[]
https://github.com/huggingface/datasets/pull/402
CONTRIBUTOR
null
false
null
[]
Search qa
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/402/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUwMDI2NTE0
{ "diff_url": "https://github.com/huggingface/datasets/pull/402.diff", "html_url": "https://github.com/huggingface/datasets/pull/402", "merged_at": "2020-07-16T14:26:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/402.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/402" }
2020-07-16T09:00:10Z
https://api.github.com/repos/huggingface/datasets/issues/402/comments
add SearchQA dataset #336
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/402/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/402/timeline
closed
false
402
null
2020-07-16T14:26:59Z
null
true
657,996,252
https://api.github.com/repos/huggingface/datasets/issues/401
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/401/events
[]
null
2020-08-06T06:16:20Z
[]
https://github.com/huggingface/datasets/pull/401
CONTRIBUTOR
null
false
null
[]
add web_questions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/401/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUwMDIyNTc0
{ "diff_url": "https://github.com/huggingface/datasets/pull/401.diff", "html_url": "https://github.com/huggingface/datasets/pull/401", "merged_at": "2020-08-06T06:16:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/401.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/401" }
2020-07-16T08:54:59Z
https://api.github.com/repos/huggingface/datasets/issues/401/comments
add Web Question dataset #336. Maybe @patrickvonplaten you can help with the dummy_data structure? It is still broken.
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/401/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/401/timeline
closed
false
401
null
2020-08-06T06:16:19Z
null
true
657,975,600
https://api.github.com/repos/huggingface/datasets/issues/400
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/400/events
[]
null
2020-07-16T08:50:51Z
[]
https://github.com/huggingface/datasets/pull/400
CONTRIBUTOR
null
false
null
[]
Web questions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/400/reactions" }
MDExOlB1bGxSZXF1ZXN0NDUwMDA1MDU5
{ "diff_url": "https://github.com/huggingface/datasets/pull/400.diff", "html_url": "https://github.com/huggingface/datasets/pull/400", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/400.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/400" }
2020-07-16T08:28:29Z
https://api.github.com/repos/huggingface/datasets/issues/400/comments
add the WebQuestion dataset #336
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/400/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/400/timeline
closed
false
400
null
2020-07-16T08:42:54Z
null
true
657,841,433
https://api.github.com/repos/huggingface/datasets/issues/399
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/399/events
[]
null
2020-07-16T06:49:48Z
[]
https://github.com/huggingface/datasets/pull/399
CONTRIBUTOR
null
false
null
[]
Spelling mistake
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/399/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5ODkxNTEy
{ "diff_url": "https://github.com/huggingface/datasets/pull/399.diff", "html_url": "https://github.com/huggingface/datasets/pull/399", "merged_at": "2020-07-16T06:49:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/399.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/399" }
2020-07-16T04:37:58Z
https://api.github.com/repos/huggingface/datasets/issues/399/comments
In "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..." ,the word "other" wrong spelled as "toehr".
{ "avatar_url": "https://avatars.githubusercontent.com/u/9410067?v=4", "events_url": "https://api.github.com/users/BlancRay/events{/privacy}", "followers_url": "https://api.github.com/users/BlancRay/followers", "following_url": "https://api.github.com/users/BlancRay/following{/other_user}", "gists_url": "https://api.github.com/users/BlancRay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BlancRay", "id": 9410067, "login": "BlancRay", "node_id": "MDQ6VXNlcjk0MTAwNjc=", "organizations_url": "https://api.github.com/users/BlancRay/orgs", "received_events_url": "https://api.github.com/users/BlancRay/received_events", "repos_url": "https://api.github.com/users/BlancRay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BlancRay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BlancRay/subscriptions", "type": "User", "url": "https://api.github.com/users/BlancRay" }
https://api.github.com/repos/huggingface/datasets/issues/399/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/399/timeline
closed
false
399
null
2020-07-16T06:49:37Z
null
true
657,511,962
https://api.github.com/repos/huggingface/datasets/issues/398
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/398/events
[]
null
2020-07-22T10:14:22Z
[]
https://github.com/huggingface/datasets/pull/398
CONTRIBUTOR
null
false
null
[]
Add inline links
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/398/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5NjE1OTk1
{ "diff_url": "https://github.com/huggingface/datasets/pull/398.diff", "html_url": "https://github.com/huggingface/datasets/pull/398", "merged_at": "2020-07-22T10:14:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/398.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/398" }
2020-07-15T17:04:04Z
https://api.github.com/repos/huggingface/datasets/issues/398/comments
Add inline links to `Contributing.md`
{ "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "events_url": "https://api.github.com/users/bharatr21/events{/privacy}", "followers_url": "https://api.github.com/users/bharatr21/followers", "following_url": "https://api.github.com/users/bharatr21/following{/other_user}", "gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bharatr21", "id": 13381361, "login": "bharatr21", "node_id": "MDQ6VXNlcjEzMzgxMzYx", "organizations_url": "https://api.github.com/users/bharatr21/orgs", "received_events_url": "https://api.github.com/users/bharatr21/received_events", "repos_url": "https://api.github.com/users/bharatr21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions", "type": "User", "url": "https://api.github.com/users/bharatr21" }
https://api.github.com/repos/huggingface/datasets/issues/398/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/398/timeline
closed
false
398
null
2020-07-22T10:14:22Z
null
true
657,510,856
https://api.github.com/repos/huggingface/datasets/issues/397
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/397/events
[]
null
2020-07-17T16:59:31Z
[]
https://github.com/huggingface/datasets/pull/397
CONTRIBUTOR
null
false
null
[]
Add contiguous sharding
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/397/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5NjE1MDA4
{ "diff_url": "https://github.com/huggingface/datasets/pull/397.diff", "html_url": "https://github.com/huggingface/datasets/pull/397", "merged_at": "2020-07-17T16:59:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/397.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/397" }
2020-07-15T17:02:58Z
https://api.github.com/repos/huggingface/datasets/issues/397/comments
This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing. Usage: ``` nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)]) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
https://api.github.com/repos/huggingface/datasets/issues/397/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/397/timeline
closed
false
397
null
2020-07-17T16:59:31Z
null
true
657,477,952
https://api.github.com/repos/huggingface/datasets/issues/396
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/396/events
[]
null
2020-07-16T08:07:32Z
[]
https://github.com/huggingface/datasets/pull/396
MEMBER
null
false
null
[]
Fix memory issue when doing select
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/396/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5NTg3MDQ4
{ "diff_url": "https://github.com/huggingface/datasets/pull/396.diff", "html_url": "https://github.com/huggingface/datasets/pull/396", "merged_at": "2020-07-16T08:07:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/396.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/396" }
2020-07-15T16:15:04Z
https://api.github.com/repos/huggingface/datasets/issues/396/comments
We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name. Fix #395
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/396/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/396/timeline
closed
false
396
null
2020-07-16T08:07:31Z
null
true
657,454,983
https://api.github.com/repos/huggingface/datasets/issues/395
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/395/events
[]
null
2020-07-16T08:07:31Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/395
MEMBER
completed
null
null
[]
Memory issue when doing select
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/395/reactions" }
MDU6SXNzdWU2NTc0NTQ5ODM=
null
2020-07-15T15:43:38Z
https://api.github.com/repos/huggingface/datasets/issues/395/comments
As noticed in #389, the following code loads the entire wikipedia dataset into memory. ```python import nlp w = nlp.load_dataset("wikipedia", "20200501.en", split="train") w.select([0]) ``` This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which for some reason tries to serialize the function along with all the wikipedia data. This is not the case with `.map` or `.filter`. However, functions that are based on `.select`, like `.shuffle`, `.shard`, `.train_test_split` and `.sort`, are affected.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/395/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/395/timeline
closed
false
395
null
2020-07-16T08:07:31Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
657,425,548
https://api.github.com/repos/huggingface/datasets/issues/394
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/394/events
[]
null
2020-07-16T07:39:52Z
[]
https://github.com/huggingface/datasets/pull/394
CONTRIBUTOR
null
false
null
[]
Remove remaining nested dict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/394/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5NTQzNTE0
{ "diff_url": "https://github.com/huggingface/datasets/pull/394.diff", "html_url": "https://github.com/huggingface/datasets/pull/394", "merged_at": "2020-07-16T07:39:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/394.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/394" }
2020-07-15T15:05:52Z
https://api.github.com/repos/huggingface/datasets/issues/394/comments
This PR deletes the remaining unnecessary nested dict (see #378).
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/394/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/394/timeline
closed
false
394
null
2020-07-16T07:39:51Z
null
true
657,330,911
https://api.github.com/repos/huggingface/datasets/issues/393
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/393/events
[]
null
2020-07-17T17:02:16Z
[]
https://github.com/huggingface/datasets/pull/393
MEMBER
null
false
null
[]
Fix extracted files directory for the DownloadManager
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/393/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5NDY1MTAz
{ "diff_url": "https://github.com/huggingface/datasets/pull/393.diff", "html_url": "https://github.com/huggingface/datasets/pull/393", "merged_at": "2020-07-17T17:02:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/393.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/393" }
2020-07-15T12:59:55Z
https://api.github.com/repos/huggingface/datasets/issues/393/comments
The cache dir was often cluttered by extracted files because of the download manager. For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to cache_dir/downloads/extracted.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/393/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/393/timeline
closed
false
393
null
2020-07-17T17:02:14Z
null
true
657,313,738
https://api.github.com/repos/huggingface/datasets/issues/392
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/392/events
[]
null
2020-07-21T13:18:36Z
[]
https://github.com/huggingface/datasets/pull/392
CONTRIBUTOR
null
false
null
[]
Style change detection
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/392/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5NDUwOTkx
{ "diff_url": "https://github.com/huggingface/datasets/pull/392.diff", "html_url": "https://github.com/huggingface/datasets/pull/392", "merged_at": "2020-07-17T17:13:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/392.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/392" }
2020-07-15T12:32:14Z
https://api.github.com/repos/huggingface/datasets/issues/392/comments
Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents. - There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now) - I've converted the integer 0,1 values to a boolean - Using manually downloaded data again. This might be changed at some point following the discussion in https://github.com/huggingface/nlp/pull/349.
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
https://api.github.com/repos/huggingface/datasets/issues/392/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/392/timeline
closed
false
392
null
2020-07-17T17:13:23Z
null
true
656,956,384
https://api.github.com/repos/huggingface/datasets/issues/390
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/390/events
[]
null
2020-07-22T09:49:58Z
[]
https://github.com/huggingface/datasets/pull/390
CONTRIBUTOR
null
false
null
[]
Concatenate datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/390/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5MTYxMzY3
{ "diff_url": "https://github.com/huggingface/datasets/pull/390.diff", "html_url": "https://github.com/huggingface/datasets/pull/390", "merged_at": "2020-07-22T09:49:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/390.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/390" }
2020-07-14T23:24:37Z
https://api.github.com/repos/huggingface/datasets/issues/390/comments
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema. This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions. Usage: ```python from nlp import Dataset, load_dataset data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]} dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2) dset_concat = Dataset.from_concat([dset1, dset2]) print(dset_concat) # Dataset(schema: {'id': 'int64'}, num_rows: 6) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
https://api.github.com/repos/huggingface/datasets/issues/390/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/390/timeline
closed
false
390
null
2020-07-22T09:49:58Z
null
true
656,921,768
https://api.github.com/repos/huggingface/datasets/issues/389
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/389/events
[]
null
2020-08-04T14:38:10Z
[]
https://github.com/huggingface/datasets/pull/389
CONTRIBUTOR
null
false
null
[]
Fix pickling of SplitDict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/389/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ5MTMyOTU5
{ "diff_url": "https://github.com/huggingface/datasets/pull/389.diff", "html_url": "https://github.com/huggingface/datasets/pull/389", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/389.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/389" }
2020-07-14T21:53:39Z
https://api.github.com/repos/huggingface/datasets/issues/389/comments
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example: ``` wiki = nlp.load_dataset('wikipedia', split='train') def sentencize(examples): ... wiki = wiki.map(sentencize, batched=True) torch.save(wiki, 'sentencized_wiki_dataset.pt') ``` However, upon unpickling the dataset via torch.load(...), this error is raised: ``` ValueError("Cannot add elem. Use .add() instead.") ``` On line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats [dicts specially](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). Pickle expects access to `dict.__setitem__`, but this is disallowed by the class. The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`. Testing: - Manually pickled and unpickled a modified wikipedia dataset. - Ran `make style` I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4", "events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}", "followers_url": "https://api.github.com/users/mitchellgordon95/followers", "following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}", "gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mitchellgordon95", "id": 7490438, "login": "mitchellgordon95", "node_id": "MDQ6VXNlcjc0OTA0Mzg=", "organizations_url": "https://api.github.com/users/mitchellgordon95/orgs", "received_events_url": "https://api.github.com/users/mitchellgordon95/received_events", "repos_url": "https://api.github.com/users/mitchellgordon95/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions", "type": "User", "url": "https://api.github.com/users/mitchellgordon95" }
https://api.github.com/repos/huggingface/datasets/issues/389/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/389/timeline
closed
false
389
null
2020-08-04T14:38:10Z
null
true
656,707,497
https://api.github.com/repos/huggingface/datasets/issues/388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/388/events
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
null
2022-10-04T18:01:28Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" } ]
https://github.com/huggingface/datasets/issues/388
NONE
completed
null
null
[]
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/388/reactions" }
MDU6SXNzdWU2NTY3MDc0OTc=
null
2020-07-14T15:36:41Z
https://api.github.com/repos/huggingface/datasets/issues/388/comments
1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs, but the download speed is **extremely slow**; the same behaviour is not observed on `wmt16` and `wmt18`. 2. When trying to download `wmt17 zh-en`, I got the following error: > ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz
{ "avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4", "events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}", "followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers", "following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}", "gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SamuelCahyawijaya", "id": 2826602, "login": "SamuelCahyawijaya", "node_id": "MDQ6VXNlcjI4MjY2MDI=", "organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs", "received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events", "repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions", "type": "User", "url": "https://api.github.com/users/SamuelCahyawijaya" }
https://api.github.com/repos/huggingface/datasets/issues/388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/388/timeline
closed
false
388
null
2022-10-04T18:01:28Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
false
656,361,357
https://api.github.com/repos/huggingface/datasets/issues/387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/387/events
[]
null
2020-07-17T11:37:00Z
[]
https://github.com/huggingface/datasets/issues/387
MEMBER
completed
null
null
[]
Conversion through to_pandas outputs numpy arrays for lists instead of Python objects
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/387/reactions" }
MDU6SXNzdWU2NTYzNjEzNTc=
null
2020-07-14T06:24:01Z
https://api.github.com/repos/huggingface/datasets/issues/387/comments
In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects. Here is an example: ```python >>> dataset._data.slice(key, 1).to_pandas().to_dict("list") {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])]} >>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0]) <class 'numpy.ndarray'> >>> dataset._data.slice(key, 1).to_pydict() {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]} ```
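A minimal workaround sketch (not part of the original report), assuming one wants plain Python lists on the caller side after the pandas round-trip; the `to_python` helper and its `batch` argument are illustrative names, not library API:

```python
import numpy as np

def to_python(batch):
    """Convert numpy arrays inside a columnar dict (as returned by
    .to_pandas().to_dict("list")) back into plain Python lists."""
    return {
        column: [value.tolist() if isinstance(value, np.ndarray) else value for value in values]
        for column, values in batch.items()
    }
```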
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/387/timeline
closed
false
387
null
2020-07-17T11:37:00Z
null
false
655,839,067
https://api.github.com/repos/huggingface/datasets/issues/386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/386/events
[]
null
2020-07-16T08:17:58Z
[]
https://github.com/huggingface/datasets/pull/386
MEMBER
null
false
null
[]
Update dataset loading and features - Add TREC dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/386/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ4MjQ1NDI4
{ "diff_url": "https://github.com/huggingface/datasets/pull/386.diff", "html_url": "https://github.com/huggingface/datasets/pull/386", "merged_at": "2020-07-16T08:17:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/386.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/386" }
2020-07-13T13:10:18Z
https://api.github.com/repos/huggingface/datasets/issues/386/comments
This PR: - add a template for a new dataset script - update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way, when you update a loading script, the data will be automatically updated instead of falling back to the previous version (which is usually outdated). This in particular makes it easier to iterate when writing a new dataset loading script. - fix a bug in the `ClassLabel` feature and make it more flexible so that its methods `str2int` and `int2str` can also accept lists, numpy arrays and PyTorch/TensorFlow tensors. - add the TREC-6 dataset
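A hedged illustration of the more flexible `ClassLabel` described above; the label names are the TREC-6 coarse classes used only as an example, and the list-input behaviour is what the PR claims, not independently verified here:

```python
import nlp

labels = nlp.ClassLabel(names=["ABBR", "DESC", "ENTY", "HUM", "LOC", "NUM"])
print(labels.str2int("LOC"))   # 4
print(labels.int2str(1))       # "DESC"
# Per this PR, lists (and numpy / PyTorch / TensorFlow tensors) should also be accepted:
print(labels.str2int(["HUM", "NUM"]))
```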
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/386/timeline
closed
false
386
null
2020-07-16T08:17:58Z
null
true
655,663,997
https://api.github.com/repos/huggingface/datasets/issues/385
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/385/events
[]
null
2020-07-15T11:27:38Z
[]
https://github.com/huggingface/datasets/pull/385
CONTRIBUTOR
null
false
null
[]
Remove unnecessary nested dict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/385/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ4MTAzMjY5
{ "diff_url": "https://github.com/huggingface/datasets/pull/385.diff", "html_url": "https://github.com/huggingface/datasets/pull/385", "merged_at": "2020-07-15T10:03:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/385.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/385" }
2020-07-13T08:46:23Z
https://api.github.com/repos/huggingface/datasets/issues/385/comments
This PR removes the unnecessary nested dictionaries used in some datasets. For now the following datasets are updated: - MLQA - RACE Will be adding more if necessary. #378
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
https://api.github.com/repos/huggingface/datasets/issues/385/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/385/timeline
closed
false
385
null
2020-07-15T10:03:53Z
null
true
655,291,201
https://api.github.com/repos/huggingface/datasets/issues/383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/383/events
[]
null
2020-07-16T16:19:46Z
[]
https://github.com/huggingface/datasets/pull/383
CONTRIBUTOR
null
false
null
[]
Adding the Linguistic Code-switching Evaluation (LinCE) benchmark
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/383/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ3ODI0OTky
{ "diff_url": "https://github.com/huggingface/datasets/pull/383.diff", "html_url": "https://github.com/huggingface/datasets/pull/383", "merged_at": "2020-07-16T16:19:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/383.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/383" }
2020-07-11T22:35:20Z
https://api.github.com/repos/huggingface/datasets/issues/383/comments
Hi, First of all, this library is really cool! Thanks for putting all of this together! This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ): > 1. Why do we need LinCE? >LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details). >Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark. The data comes from social media and here's the summary table of tasks per language pair: | Language Pairs | LID | POS | NER | SA | |----------------------------------------|-----|-----|-----|----| | Spanish-English | ✅ | ✅ | ✅ | ✅ | | Hindi-English | ✅ | ✅ | ✅ | | | Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | | | Nepali-English | ✅ | | | | The tasks are as follows: * LID: token-level language identification * POS: part-of-speech tagging * NER: named entity recognition * SA: sentiment analysis With the exception of MSA-EA, the rest of the datasets contain token-level LID labels. ## Usage For Spanish-English LID, we can load the data as follows: ``` import nlp data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng') for split in data: print(data[split]) ``` Here's the output: ``` Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030) Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332) Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289) ``` Here's the list of shortcut names for every dataset available in LinCE: * `lid_spaeng` * `lid_hineng` * `lid_nepeng` * `lid_msaea` * `pos_spaeng` * `pos_hineng` * `ner_spaeng` * `ner_hineng` * `ner_msaea` * `sa_spaeng` All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script. 
## Features Here is how the features look in the case of language identification (LID) tasks: | LID Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | For part-of-speech (POS) tagging: | POS Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `pos` | `list<str>` | List of POS tags (string) of a sentence | For named entity recognition (NER): | NER Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `ner` | `list<str>` | List of NER labels (string) of a sentence | **NOTE**: the MSA-EA NER dataset does not contain the `lid` feature. For sentiment analysis (SA): | SA Feature | Type | Description | |---------------------|-------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `sa` | `str` | Sentiment label (string) of a sentence |
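A hedged usage sketch tying the feature tables above to the loading example earlier in this description (same local script path and `lid_spaeng` configuration):

```python
import nlp

# Iterate over the token-level LID labels of the first training sentence.
data = nlp.load_dataset("./datasets/lince/lince.py", "lid_spaeng")
example = data["train"][0]
for token, lid in zip(example["tokens"], example["lid"]):
    print(f"{token}\t{lid}")
```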
{ "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "events_url": "https://api.github.com/users/gaguilar/events{/privacy}", "followers_url": "https://api.github.com/users/gaguilar/followers", "following_url": "https://api.github.com/users/gaguilar/following{/other_user}", "gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gaguilar", "id": 5833357, "login": "gaguilar", "node_id": "MDQ6VXNlcjU4MzMzNTc=", "organizations_url": "https://api.github.com/users/gaguilar/orgs", "received_events_url": "https://api.github.com/users/gaguilar/received_events", "repos_url": "https://api.github.com/users/gaguilar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions", "type": "User", "url": "https://api.github.com/users/gaguilar" }
https://api.github.com/repos/huggingface/datasets/issues/383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/383/timeline
closed
false
383
null
2020-07-16T16:19:46Z
null
true
655,290,482
https://api.github.com/repos/huggingface/datasets/issues/382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/382/events
[]
null
2020-07-11T22:49:38Z
[]
https://github.com/huggingface/datasets/issues/382
NONE
completed
null
null
[]
1080
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/382/reactions" }
MDU6SXNzdWU2NTUyOTA0ODI=
null
2020-07-11T22:29:07Z
https://api.github.com/repos/huggingface/datasets/issues/382/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/60942503?v=4", "events_url": "https://api.github.com/users/saq194/events{/privacy}", "followers_url": "https://api.github.com/users/saq194/followers", "following_url": "https://api.github.com/users/saq194/following{/other_user}", "gists_url": "https://api.github.com/users/saq194/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/saq194", "id": 60942503, "login": "saq194", "node_id": "MDQ6VXNlcjYwOTQyNTAz", "organizations_url": "https://api.github.com/users/saq194/orgs", "received_events_url": "https://api.github.com/users/saq194/received_events", "repos_url": "https://api.github.com/users/saq194/repos", "site_admin": false, "starred_url": "https://api.github.com/users/saq194/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saq194/subscriptions", "type": "User", "url": "https://api.github.com/users/saq194" }
https://api.github.com/repos/huggingface/datasets/issues/382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/382/timeline
closed
false
382
null
2020-07-11T22:49:38Z
null
false
655,277,119
https://api.github.com/repos/huggingface/datasets/issues/381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/381/events
[]
null
2020-07-11T20:50:39Z
[]
https://github.com/huggingface/datasets/issues/381
NONE
completed
null
null
[]
NLp
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/381/reactions" }
MDU6SXNzdWU2NTUyNzcxMTk=
null
2020-07-11T20:50:14Z
https://api.github.com/repos/huggingface/datasets/issues/381/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/68147610?v=4", "events_url": "https://api.github.com/users/Spartanthor/events{/privacy}", "followers_url": "https://api.github.com/users/Spartanthor/followers", "following_url": "https://api.github.com/users/Spartanthor/following{/other_user}", "gists_url": "https://api.github.com/users/Spartanthor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Spartanthor", "id": 68147610, "login": "Spartanthor", "node_id": "MDQ6VXNlcjY4MTQ3NjEw", "organizations_url": "https://api.github.com/users/Spartanthor/orgs", "received_events_url": "https://api.github.com/users/Spartanthor/received_events", "repos_url": "https://api.github.com/users/Spartanthor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Spartanthor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Spartanthor/subscriptions", "type": "User", "url": "https://api.github.com/users/Spartanthor" }
https://api.github.com/repos/huggingface/datasets/issues/381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/381/timeline
closed
false
381
null
2020-07-11T20:50:39Z
null
false
655,226,316
https://api.github.com/repos/huggingface/datasets/issues/378
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/378/events
[]
null
2020-07-15T16:17:20Z
[]
https://github.com/huggingface/datasets/issues/378
MEMBER
completed
null
null
[]
[dataset] Structure of MLQA seems unnecessarily nested
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/378/reactions" }
MDU6SXNzdWU2NTUyMjYzMTY=
null
2020-07-11T15:16:08Z
https://api.github.com/repos/huggingface/datasets/issues/378/comments
The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97 Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds? ```python features=nlp.Features( { "context": nlp.Value("string"), "questions": nlp.features.Sequence({"question": nlp.Value("string")}), "answers": nlp.features.Sequence( {"text": nlp.Value("string"), "answer_start": nlp.Value("int32"),} ), "ids": nlp.features.Sequence({"idx": nlp.Value("string")}) ```
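A hedged sketch of a flatter alternative (not the actual MLQA definition): `Sequence` can wrap a single feature directly, which avoids the one-key nested dicts for `questions` and `ids`:

```python
import nlp

features = nlp.Features(
    {
        "context": nlp.Value("string"),
        "questions": nlp.features.Sequence(nlp.Value("string")),
        "answers": nlp.features.Sequence(
            {"text": nlp.Value("string"), "answer_start": nlp.Value("int32")}
        ),
        "ids": nlp.features.Sequence(nlp.Value("string")),
    }
)
```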
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/378/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/378/timeline
closed
false
378
null
2020-07-15T16:17:20Z
null
false
655,215,790
https://api.github.com/repos/huggingface/datasets/issues/377
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/377/events
[]
null
2020-07-11T14:30:51Z
[]
https://github.com/huggingface/datasets/issues/377
NONE
completed
null
null
[]
Iyy!!!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/377/reactions" }
MDU6SXNzdWU2NTUyMTU3OTA=
null
2020-07-11T14:11:07Z
https://api.github.com/repos/huggingface/datasets/issues/377/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/68154535?v=4", "events_url": "https://api.github.com/users/ajinomoh/events{/privacy}", "followers_url": "https://api.github.com/users/ajinomoh/followers", "following_url": "https://api.github.com/users/ajinomoh/following{/other_user}", "gists_url": "https://api.github.com/users/ajinomoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ajinomoh", "id": 68154535, "login": "ajinomoh", "node_id": "MDQ6VXNlcjY4MTU0NTM1", "organizations_url": "https://api.github.com/users/ajinomoh/orgs", "received_events_url": "https://api.github.com/users/ajinomoh/received_events", "repos_url": "https://api.github.com/users/ajinomoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ajinomoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ajinomoh/subscriptions", "type": "User", "url": "https://api.github.com/users/ajinomoh" }
https://api.github.com/repos/huggingface/datasets/issues/377/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/377/timeline
closed
false
377
null
2020-07-11T14:30:51Z
null
false
655,047,826
https://api.github.com/repos/huggingface/datasets/issues/376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/376/events
[]
null
2022-10-04T18:05:39Z
[]
https://github.com/huggingface/datasets/issues/376
MEMBER
completed
null
null
[]
to_pandas conversion doesn't always work
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/376/reactions" }
MDU6SXNzdWU2NTUwNDc4MjY=
null
2020-07-10T21:33:31Z
https://api.github.com/repos/huggingface/datasets/issues/376/comments
For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible. Here is an example using the official SQUAD v2 JSON file. This example was found while investigating #373. ```python >>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.json"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version="1.0.0", field='data') >>> squad['train'] Dataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442) >>> squad['train'][0] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 589, in __getitem__ format_kwargs=self._format_kwargs, File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 529, in _getitem outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict("list")) File "pyarrow/array.pxi", line 559, in pyarrow.lib._PandasConvertible.to_pandas File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 766, in table_to_blockmanager blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes) File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 1101, in _table_to_blocks list(extension_columns.keys())) File "pyarrow/table.pxi", line 881, in pyarrow.lib.table_to_blocks File "pyarrow/error.pxi", line 105, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string> ``` cc @lhoestq would we have a way to detect this from the schema maybe? 
Here is the schema for this pretty complex JSON: ```python >>> squad['train'].schema title: string paragraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>> child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string> child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>> child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>> child 0, question: string child 1, id: string child 2, answers: list<item: struct<text: string, answer_start: int64>> child 0, item: struct<text: string, answer_start: int64> child 0, text: string child 1, answer_start: int64 child 3, is_impossible: bool child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>> child 0, item: struct<text: string, answer_start: int64> child 0, text: string child 1, answer_start: int64 child 1, context: string ```
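A hedged fallback sketch, not a fix: for rows whose Arrow schema pandas cannot handle, `to_pydict()` on the underlying table still works (as in the related issue above); this reuses the `squad` object loaded in the snippet at the top of this report:

```python
# Read the first record directly from the Arrow table, bypassing pandas.
row = squad["train"]._data.slice(0, 1).to_pydict()
print(row["title"])
```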
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/376/timeline
closed
false
376
null
2022-10-04T18:05:39Z
null
false
655,023,307
https://api.github.com/repos/huggingface/datasets/issues/375
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/375/events
[]
null
2022-06-01T15:15:59Z
[]
https://github.com/huggingface/datasets/issues/375
NONE
completed
null
null
[]
TypeError when computing bertscore
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/375/reactions" }
MDU6SXNzdWU2NTUwMjMzMDc=
null
2020-07-10T20:37:44Z
https://api.github.com/repos/huggingface/datasets/issues/375/comments
Hi, I installed nlp 0.3.0 via pip, and my python version is 3.7. When I tried to compute bertscore with the code: ``` import nlp bertscore = nlp.load_metric('bertscore') # load hyps and refs ... print (bertscore.compute(hyps, refs, lang='en')) ``` I got the following error. ``` Traceback (most recent call last): File "bert_score_evaluate.py", line 16, in <module> print (bertscore.compute(hyps, refs, lang='en')) File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute output = self._compute(predictions=predictions, references=references, **metrics_kwargs) File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() takes 3 positional arguments but 4 were given ``` It seems like there is something wrong with get_hash() function?
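A hedged debugging sketch (assumption: the mismatch comes from an incompatible installed `bert-score` version rather than from `nlp` itself); it only inspects the installed signature mentioned in the traceback:

```python
import inspect
import bert_score

# Compare the installed version and get_hash signature against what the
# nlp bertscore metric script expects (it passes four arguments).
print(bert_score.__version__)
print(inspect.signature(bert_score.utils.get_hash))
```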
{ "avatar_url": "https://avatars.githubusercontent.com/u/13269577?v=4", "events_url": "https://api.github.com/users/willywsm1013/events{/privacy}", "followers_url": "https://api.github.com/users/willywsm1013/followers", "following_url": "https://api.github.com/users/willywsm1013/following{/other_user}", "gists_url": "https://api.github.com/users/willywsm1013/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/willywsm1013", "id": 13269577, "login": "willywsm1013", "node_id": "MDQ6VXNlcjEzMjY5NTc3", "organizations_url": "https://api.github.com/users/willywsm1013/orgs", "received_events_url": "https://api.github.com/users/willywsm1013/received_events", "repos_url": "https://api.github.com/users/willywsm1013/repos", "site_admin": false, "starred_url": "https://api.github.com/users/willywsm1013/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willywsm1013/subscriptions", "type": "User", "url": "https://api.github.com/users/willywsm1013" }
https://api.github.com/repos/huggingface/datasets/issues/375/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/375/timeline
closed
false
375
null
2022-06-01T15:15:59Z
null
false
654,895,066
https://api.github.com/repos/huggingface/datasets/issues/374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/374/events
[]
null
2020-07-13T13:44:03Z
[]
https://github.com/huggingface/datasets/pull/374
MEMBER
null
false
null
[]
Add dataset post processing for faiss indexes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/374/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ3NTMxMzUy
{ "diff_url": "https://github.com/huggingface/datasets/pull/374.diff", "html_url": "https://github.com/huggingface/datasets/pull/374", "merged_at": "2020-07-13T13:44:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/374.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/374" }
2020-07-10T16:25:59Z
https://api.github.com/repos/huggingface/datasets/issues/374/comments
# Post processing of datasets for faiss indexes Now that we can have datasets with embeddings (see `wiki_dpr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries. ## Implementation proposition - Faiss indexes have to be added to the `nlp.Dataset` object, and therefore it's in a different scope from what the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder` do. Therefore I added a new method for post processing of the `nlp.Dataset` object called `_post_process` (name could change) - The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (that is focused on arrow files creation), so the post processing is run inside the `as_dataset` method. - `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources` - as we know what the post processing resources are, we can download them automatically from google storage instead of computing them if they're available (as we do for arrow files) I'd be happy to discuss these choices! ## The `wiki_dpr` index It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory. This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768. I couldn't use the Faiss `index_factory` directly as I needed to set the metric to inner product. ## Example of usage ```python import nlp dset = nlp.load_dataset( "wiki_dpr", "psgs_w100_with_nq_embeddings", split="train", with_index=True ) print(len(dset), dset.list_indexes()) # (21015300, ['embeddings']) ``` (it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too) ## Demo You can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers: https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
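A hedged follow-up sketch of actually querying the loaded index, assuming the nearest-neighbour search API that ships alongside `add_faiss_index`; the random query vector is only a stand-in for a real DPR question embedding (see the linked demo):

```python
import numpy as np
import nlp

dset = nlp.load_dataset(
    "wiki_dpr", "psgs_w100_with_nq_embeddings", split="train", with_index=True
)
# Stand-in query vector with the same dimensionality as the stored embeddings.
question_embedding = np.random.randn(768).astype("float32")
scores, passages = dset.get_nearest_examples("embeddings", question_embedding, k=5)
print(passages["title"])
```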
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/374/timeline
closed
false
374
null
2020-07-13T13:44:01Z
null
true
654,845,133
https://api.github.com/repos/huggingface/datasets/issues/373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/373/events
[]
null
2022-10-04T18:05:47Z
[]
https://github.com/huggingface/datasets/issues/373
CONTRIBUTOR
completed
null
null
[]
Segmentation fault when loading local JSON dataset as of #372
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/373/reactions" }
MDU6SXNzdWU2NTQ4NDUxMzM=
null
2020-07-10T15:04:25Z
https://api.github.com/repos/huggingface/datasets/issues/373/comments
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data') ``` causes ``` Using custom data configuration default Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0... 0 tables [00:00, ? tables/s]Segmentation fault (core dumped) ``` where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/. This is consistent with other SQuAD-formatted JSON files. When attempting to load the dataset again, I get the following: ``` Using custom data configuration default Traceback (most recent call last): File "dataloader.py", line 6, in <module> 'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data') File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir os.makedirs(tmp_dir) File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete' ``` (Not sure if you wanted this in the previous issue #369 or not as it was closed.)
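A hedged preprocessing sketch (not the library's fix): flattening the nested SQuAD file into one JSON record per line, which the generic `json` loader handles without the `field` argument; the output path is an arbitrary choice:

```python
import json

# Read the nested SQuAD file and write one article per line.
with open("./datasets/train-v2.0.json") as f:
    squad = json.load(f)

with open("./datasets/train-v2.0.jsonl", "w") as out:
    for article in squad["data"]:
        out.write(json.dumps(article) + "\n")
```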
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
https://api.github.com/repos/huggingface/datasets/issues/373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/373/timeline
closed
false
373
null
2022-10-04T18:05:47Z
null
false
654,774,420
https://api.github.com/repos/huggingface/datasets/issues/372
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/372/events
[]
null
2020-07-10T14:52:07Z
[]
https://github.com/huggingface/datasets/pull/372
MEMBER
null
false
null
[]
Make the json script more flexible
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/372/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ3NDMzNTA4
{ "diff_url": "https://github.com/huggingface/datasets/pull/372.diff", "html_url": "https://github.com/huggingface/datasets/pull/372", "merged_at": "2020-07-10T14:52:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/372.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/372" }
2020-07-10T13:15:15Z
https://api.github.com/repos/huggingface/datasets/issues/372/comments
Fix https://github.com/huggingface/nlp/issues/359 Fix https://github.com/huggingface/nlp/issues/369 The JSON script can now accept JSON files containing a single dict with the records as a list in one attribute of the dict (previously it only accepted JSON files containing the records as rows of dicts). In this case, you should indicate with `field=XXX` the name of the field in the JSON structure which contains the records you want to load. The records can be a dict of lists or a list of dicts. E.g. to load the SQuAD dataset JSON (without using the `squad`-specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do: ```python from nlp import load_dataset dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data') ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
https://api.github.com/repos/huggingface/datasets/issues/372/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/372/timeline
closed
false
372
null
2020-07-10T14:52:06Z
null
true
654,668,242
https://api.github.com/repos/huggingface/datasets/issues/371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/371/events
[]
null
2020-07-10T13:45:22Z
[]
https://github.com/huggingface/datasets/pull/371
MEMBER
null
false
null
[]
Fix cached file path for metrics with different config names
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/371/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ3MzQ4NDgw
{ "diff_url": "https://github.com/huggingface/datasets/pull/371.diff", "html_url": "https://github.com/huggingface/datasets/pull/371", "merged_at": "2020-07-10T13:45:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/371.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/371" }
2020-07-10T10:02:24Z
https://api.github.com/repos/huggingface/datasets/issues/371/comments
The config name was not taken into account when building the cached file path. This should fix #368
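A hedged illustration of the scenario this fixes: two different GLUE configurations loaded in the same run should now cache to distinct files instead of colliding on the same path:

```python
import nlp

# With this fix, each configuration gets its own cached file.
mrpc_metric = nlp.load_metric("glue", "mrpc")
stsb_metric = nlp.load_metric("glue", "stsb")
```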
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/371/timeline
closed
false
371
null
2020-07-10T13:45:20Z
null
true
654,304,193
https://api.github.com/repos/huggingface/datasets/issues/370
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/370/events
[]
null
2020-07-10T14:05:44Z
[]
https://github.com/huggingface/datasets/pull/370
CONTRIBUTOR
null
false
null
[]
Allow indexing Dataset via np.ndarray
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/370/reactions" }
MDExOlB1bGxSZXF1ZXN0NDQ3MDU3NTIw
{ "diff_url": "https://github.com/huggingface/datasets/pull/370.diff", "html_url": "https://github.com/huggingface/datasets/pull/370", "merged_at": "2020-07-10T14:05:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/370.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/370" }
2020-07-09T19:43:15Z
https://api.github.com/repos/huggingface/datasets/issues/370/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
https://api.github.com/repos/huggingface/datasets/issues/370/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/370/timeline
closed
false
370
null
2020-07-10T14:05:43Z
null
true
654,186,890
https://api.github.com/repos/huggingface/datasets/issues/369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/369/events
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
null
2020-12-15T23:07:22Z
[]
https://github.com/huggingface/datasets/issues/369
CONTRIBUTOR
completed
null
null
[]
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/369/reactions" }
MDU6SXNzdWU2NTQxODY4OTA=
null
2020-07-09T16:16:53Z
https://api.github.com/repos/huggingface/datasets/issues/369/comments
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False): File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` I haven't been able to find any reports of this specific pyarrow error here or elsewhere.
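A hedged sketch of the "increase block size" hint from the error message, reading the file directly with pyarrow (outside of `nlp`) to check whether a larger `block_size` avoids the straddling-object error; the 10 MB value is an arbitrary guess, not a recommended setting:

```python
import pyarrow.json as paj

read_options = paj.ReadOptions(block_size=10 << 20)  # ~10 MB blocks
table = paj.read_json("./path/to/file.json", read_options=read_options)
print(table.schema)
```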
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
https://api.github.com/repos/huggingface/datasets/issues/369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/369/timeline
closed
false
369
null
2020-07-10T14:52:06Z
null
false
654,087,251
https://api.github.com/repos/huggingface/datasets/issues/368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/368/events
[]
null
2020-07-10T13:45:20Z
[]
https://github.com/huggingface/datasets/issues/368
NONE
completed
null
null
[]
load_metric can't acquire lock anymore
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/368/reactions" }
MDU6SXNzdWU2NTQwODcyNTE=
null
2020-07-09T14:04:09Z
https://api.github.com/repos/huggingface/datasets/issues/368/comments
I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__ self.filelock.acquire(timeout=1) File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire raise Timeout(self._lock_file) filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "examples_huggingface_nlp.py", line 268, in <module> main() File "examples_huggingface_nlp.py", line 242, in main dataset, metric = get_dataset_metric(glue_task) File "examples_huggingface_nlp.py", line 77, in get_dataset_metric metric = nlp.load_metric('glue', glue_config, experiment_id=1) File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric **metric_init_kwargs, File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__ "Cannot acquire lock, caching file might be used by another process, " ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run. I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
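A hedged workaround sketch based on the suggestion in the error message itself: give each run a unique `experiment_id` so stale or concurrent locks do not collide (the GLUE config name here is only an example):

```python
import uuid
import nlp

# A fresh experiment_id per run avoids reusing a locked cache file.
metric = nlp.load_metric("glue", "mrpc", experiment_id=str(uuid.uuid4()))
```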
{ "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ydshieh", "id": 2521628, "login": "ydshieh", "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "repos_url": "https://api.github.com/users/ydshieh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "type": "User", "url": "https://api.github.com/users/ydshieh" }
https://api.github.com/repos/huggingface/datasets/issues/368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/368/timeline
closed
false
368
null
2020-07-10T13:45:20Z
null
false