url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.94B) | node_id (stringlengths 18-32) | number (int64 1-6.3k) | title (stringlengths 1-290) | user (stringlengths 870-1.16k) | labels (stringclasses 78 values) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (stringclasses 66 values) | assignees (stringclasses 78 values) | milestone (stringclasses 8 values) | comments (stringlengths 2-193k) | created_at (stringlengths 25) | updated_at (stringlengths 25) | closed_at (stringlengths 25, nullable ⌀) | author_association (stringclasses 3 values) | active_lock_reason (float64) | body (stringlengths 1-228k, nullable ⌀) | reactions (stringlengths 191-197) | timeline_url (stringlengths 67-70) | performed_via_github_app (float64) | state_reason (stringclasses 3 values) | draft (float64 0-1, nullable ⌀) | pull_request (stringlengths 289-315, nullable ⌀) | is_pull_request (bool 2 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/584/comments | https://api.github.com/repos/huggingface/datasets/issues/584/events | https://github.com/huggingface/datasets/pull/584 | 695,186,652 | MDExOlB1bGxSZXF1ZXN0NDgxNDY0NjEz | 584 | Use github versioning | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['I noticed that datasets like `cnn_dailymail` need the `version` parameter to be passed to its `config_kwargs`.\r\nShall we rename the `version` paramater in `load_dataset` ? Maybe `repo_version` or `script_version` ?'] | 2020-09-07 14:58:15+00:00 | 2020-09-09 13:37:35+00:00 | 2020-09-09 13:37:34+00:00 | MEMBER | null | Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version.
To fix that I changed the download URL from S3 to GitHub, and added a `version` parameter in `load_dataset` and `load_metric` to pin a certain version of the lib, as in #562 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/584/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/584/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/584.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/584', 'merged_at': '2020-09-09T13:37:34Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/584.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/584'} | true |
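For context on PR #584 above, here is a hedged usage sketch rather than the merged implementation: the PR body describes a `version` parameter for `load_dataset` and `load_metric`, while the comment thread discusses renaming it (`script_version`, `repo_version`), so the exact keyword and the tag value below are assumptions.

```python
from nlp import load_dataset, load_metric

# Usage sketch only: pin the dataset/metric script to a release tag on GitHub
# instead of always fetching the copy that tracks master.
# The keyword follows the PR body; the thread discusses renaming it
# (e.g. `script_version`) because some dataset configs already take `version`.
squad = load_dataset("squad", version="0.4.0")   # "0.4.0" is a placeholder tag
rouge = load_metric("rouge", version="0.4.0")
```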
https://api.github.com/repos/huggingface/datasets/issues/583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/583/comments | https://api.github.com/repos/huggingface/datasets/issues/583/events | https://github.com/huggingface/datasets/issues/583 | 695,166,265 | MDU6SXNzdWU2OTUxNjYyNjU= | 583 | ArrowIndexError on Dataset.select | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-09-07 14:36:29+00:00 | 2020-09-08 07:43:15+00:00 | 2020-09-08 07:43:15+00:00 | MEMBER | null | If the indices table consists in several chunks, then `dataset.select` results in an `ArrowIndexError` error for pyarrow < 1.0.0
Example:
```python
from nlp import load_dataset
mnli = load_dataset("glue", "mnli", split="train")
shuffled = mnli.shuffle(seed=42)
shuffled.select(list(range(len(mnli))))
```
raises:
```python
---------------------------------------------------------------------------
ArrowIndexError Traceback (most recent call last)
<ipython-input-64-006a5d38d418> in <module>
----> 1 mnli.shuffle(seed=42).select(list(range(len(mnli))))
~/Desktop/hf/nlp/src/nlp/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
~/Desktop/hf/nlp/src/nlp/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
1653 if self._indices is not None:
1654 if PYARROW_V0:
-> 1655 indices_array = self._indices.column(0).chunk(0).take(indices_array)
1656 else:
1657 indices_array = self._indices.column(0).take(indices_array)
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.Array.take()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: take index out of bounds
```
This is because `take` is only applied to the first chunk, which contains only 1,000 elements by default (mnli has ~400,000 elements).
Shall we change that to use
```python
pa.concat_tables(self._indices._indices.slice(i, 1) for i in indices_array)
```
instead of `take` ? @thomwolf | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/583/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/583/timeline | null | completed | null | null | false |
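To make the failure mode in issue #583 concrete, the following is a minimal, self-contained pyarrow sketch with illustrative data (not the library's internals): calling `take` on only the first chunk of a multi-chunk indices column goes out of bounds, while concatenating per-index slices of the full table, along the lines proposed above, reaches every chunk.

```python
import pyarrow as pa

# Build an indices column split into two chunks, mimicking an indices table
# whose first chunk holds only part of the dataset's row ids.
chunked = pa.chunked_array([pa.array([0, 1, 2]), pa.array([3, 4, 5])])
table = pa.Table.from_arrays([chunked], names=["indices"])

indices = [0, 4]  # index 4 lives in the second chunk

# Taking only from the first chunk fails for index 4, which is out of bounds
# for a 3-element chunk:
# table.column(0).chunk(0).take(pa.array(indices))  # -> ArrowIndexError

# Workaround in the spirit of the issue: slice the full table row by row
# and concatenate, so every chunk is reachable.
selected = pa.concat_tables([table.slice(i, 1) for i in indices])
print(selected.column("indices").to_pylist())  # [0, 4]
```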
https://api.github.com/repos/huggingface/datasets/issues/582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/582/comments | https://api.github.com/repos/huggingface/datasets/issues/582/events | https://github.com/huggingface/datasets/issues/582 | 695,126,456 | MDU6SXNzdWU2OTUxMjY0NTY= | 582 | Allow for PathLike objects | {'avatar_url': 'https://avatars.githubusercontent.com/u/2779410?v=4', 'events_url': 'https://api.github.com/users/BramVanroy/events{/privacy}', 'followers_url': 'https://api.github.com/users/BramVanroy/followers', 'following_url': 'https://api.github.com/users/BramVanroy/following{/other_user}', 'gists_url': 'https://api.github.com/users/BramVanroy/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/BramVanroy', 'id': 2779410, 'login': 'BramVanroy', 'node_id': 'MDQ6VXNlcjI3Nzk0MTA=', 'organizations_url': 'https://api.github.com/users/BramVanroy/orgs', 'received_events_url': 'https://api.github.com/users/BramVanroy/received_events', 'repos_url': 'https://api.github.com/users/BramVanroy/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/BramVanroy/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/BramVanroy/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/BramVanroy'} | [] | closed | false | null | [] | null | [] | 2020-09-07 13:54:51+00:00 | 2020-09-08 07:45:17+00:00 | 2020-09-08 07:45:17+00:00 | CONTRIBUTOR | null | Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error.
```python
files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt"))
dataset = load_dataset("text", data_files=files)
```
Traceback:
```
Traceback (most recent call last):
File "C:/dev/python/dutch-simplification/main.py", line 7, in <module>
dataset = load_dataset("text", data_files=files)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare
self._save_info()
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 564, in _save_info
self.info.write_to_directory(self._cache_dir)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\info.py", line 149, in write_to_directory
self._dump_info(f)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\info.py", line 156, in _dump_info
file.write(json.dumps(asdict(self)).encode("utf-8"))
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: keys must be str, int, float, bool or None, not WindowsPath
```
We have to cast to a string explicitly to make this work. It would be nicer if we could actually use PathLike objects.
```python
files = [str(f) for f in Path(r"D:\corpora\yourcorpus").glob("*.txt")]
```
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/582/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/582/timeline | null | completed | null | null | false |
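Until PathLike inputs are accepted directly (issue #582 above), a small workaround sketch is to normalize `Path` objects to plain strings before handing them to `load_dataset`; the helper name below is illustrative, not part of the library.

```python
from pathlib import Path
from nlp import load_dataset

def as_str_paths(paths):
    """Cast PathLike entries to plain strings so they survive JSON serialization."""
    return [str(p) for p in paths]

files = as_str_paths(Path(r"D:\corpora\yourcorpus").glob("*.txt"))
dataset = load_dataset("text", data_files=files)
```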
https://api.github.com/repos/huggingface/datasets/issues/581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/581/comments | https://api.github.com/repos/huggingface/datasets/issues/581/events | https://github.com/huggingface/datasets/issues/581 | 695,120,517 | MDU6SXNzdWU2OTUxMjA1MTc= | 581 | Better error message when input file does not exist | {'avatar_url': 'https://avatars.githubusercontent.com/u/2779410?v=4', 'events_url': 'https://api.github.com/users/BramVanroy/events{/privacy}', 'followers_url': 'https://api.github.com/users/BramVanroy/followers', 'following_url': 'https://api.github.com/users/BramVanroy/following{/other_user}', 'gists_url': 'https://api.github.com/users/BramVanroy/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/BramVanroy', 'id': 2779410, 'login': 'BramVanroy', 'node_id': 'MDQ6VXNlcjI3Nzk0MTA=', 'organizations_url': 'https://api.github.com/users/BramVanroy/orgs', 'received_events_url': 'https://api.github.com/users/BramVanroy/received_events', 'repos_url': 'https://api.github.com/users/BramVanroy/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/BramVanroy/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/BramVanroy/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/BramVanroy'} | [] | closed | false | null | [] | null | [] | 2020-09-07 13:47:59+00:00 | 2020-09-09 09:00:07+00:00 | 2020-09-09 09:00:07+00:00 | CONTRIBUTOR | null | In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y.
```python
dataset = load_dataset("text", data_files=[])
```
Example error trace.
```
Using custom data configuration default
Downloading and preparing dataset text/default-d18f9b6611eb8e16 (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to C:\Users\bramv\.cache\huggingface\datasets\text\default-d18f9b6611eb8e16\0.0.0\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b...
Traceback (most recent call last):
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 424, in incomplete_dir
yield tmp_dir
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 537, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 813, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\arrow_writer.py", line 217, in finalize
self.pa_writer.close()
AttributeError: 'NoneType' object has no attribute 'close'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/dev/python/dutch-simplification/main.py", line 7, in <module>
dataset = load_dataset("text", data_files=files)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare
self._save_info()
File "c:\users\bramv\appdata\local\programs\python\python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 430, in incomplete_dir
shutil.rmtree(tmp_dir)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 737, in rmtree
return _rmtree_unsafe(path, onerror)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 615, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 613, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\bramv\\.cache\\huggingface\\datasets\\text\\default-d18f9b6611eb8e16\\0.0.0\\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b.incomplete\\text-train.arrow'
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/581/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/581/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/580/comments | https://api.github.com/repos/huggingface/datasets/issues/580/events | https://github.com/huggingface/datasets/issues/580 | 694,954,551 | MDU6SXNzdWU2OTQ5NTQ1NTE= | 580 | nlp re-creates already-there caches when using a script, but not within a shell | {'avatar_url': 'https://avatars.githubusercontent.com/u/26709476?v=4', 'events_url': 'https://api.github.com/users/TevenLeScao/events{/privacy}', 'followers_url': 'https://api.github.com/users/TevenLeScao/followers', 'following_url': 'https://api.github.com/users/TevenLeScao/following{/other_user}', 'gists_url': 'https://api.github.com/users/TevenLeScao/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/TevenLeScao', 'id': 26709476, 'login': 'TevenLeScao', 'node_id': 'MDQ6VXNlcjI2NzA5NDc2', 'organizations_url': 'https://api.github.com/users/TevenLeScao/orgs', 'received_events_url': 'https://api.github.com/users/TevenLeScao/received_events', 'repos_url': 'https://api.github.com/users/TevenLeScao/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/TevenLeScao/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/TevenLeScao'} | [] | closed | false | null | [] | null | ["Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)"
'Fixed with a clean re-install!'] | 2020-09-07 10:23:50+00:00 | 2020-09-07 15:19:09+00:00 | 2020-09-07 14:26:41+00:00 | CONTRIBUTOR | null | `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 1)
```
twice. If launched from a `file.py` script, the cache will be re-created the second time. If launched as 3 shell/`ipython` commands, `nlp` will correctly re-use the cache.
As observed with @lhoestq. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/580/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/580/timeline | null | completed | null | null | false |
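For issue #580 above, which was ultimately resolved by a clean re-install, one possible way to sidestep fingerprint-based cache lookups when running from a script is to name the cache file explicitly. This assumes `Dataset.filter` accepts a `cache_file_name` argument like `Dataset.map`; treat it as a sketch, not a confirmed fix.

```python
import nlp

# Sketch: pin the cache files explicitly so repeated script runs reuse the same
# arrow files instead of recomputing a fingerprint for the lambda each time.
hans = nlp.load_dataset("hans", split="validation")
hans_easy = hans.filter(lambda x: x["label"] == 0, cache_file_name="hans_easy.arrow")
hans_hard = hans.filter(lambda x: x["label"] == 1, cache_file_name="hans_hard.arrow")
```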
https://api.github.com/repos/huggingface/datasets/issues/579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/579/comments | https://api.github.com/repos/huggingface/datasets/issues/579/events | https://github.com/huggingface/datasets/pull/579 | 694,947,599 | MDExOlB1bGxSZXF1ZXN0NDgxMjU1OTI5 | 579 | Doc metrics | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | [] | 2020-09-07 10:15:24+00:00 | 2020-09-10 13:06:11+00:00 | 2020-09-10 13:06:10+00:00 | MEMBER | null | Adding documentation on metrics loading/using/sharing | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/579/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/579/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/579.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/579', 'merged_at': '2020-09-10T13:06:10Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/579.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/579'} | true |
https://api.github.com/repos/huggingface/datasets/issues/578 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/578/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/578/comments | https://api.github.com/repos/huggingface/datasets/issues/578/events | https://github.com/huggingface/datasets/pull/578 | 694,849,940 | MDExOlB1bGxSZXF1ZXN0NDgxMTczNDE0 | 578 | Add CommonGen Dataset | {'avatar_url': 'https://avatars.githubusercontent.com/u/22514219?v=4', 'events_url': 'https://api.github.com/users/JetRunner/events{/privacy}', 'followers_url': 'https://api.github.com/users/JetRunner/followers', 'following_url': 'https://api.github.com/users/JetRunner/following{/other_user}', 'gists_url': 'https://api.github.com/users/JetRunner/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/JetRunner', 'id': 22514219, 'login': 'JetRunner', 'node_id': 'MDQ6VXNlcjIyNTE0MjE5', 'organizations_url': 'https://api.github.com/users/JetRunner/orgs', 'received_events_url': 'https://api.github.com/users/JetRunner/received_events', 'repos_url': 'https://api.github.com/users/JetRunner/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/JetRunner/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/JetRunner/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/JetRunner'} | [] | closed | false | null | [] | null | [] | 2020-09-07 08:17:17+00:00 | 2020-09-07 11:50:29+00:00 | 2020-09-07 11:49:07+00:00 | CONTRIBUTOR | null | CC Authors:
@yuchenlin @MichaelZhouwang | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/578/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/578/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/578.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/578', 'merged_at': '2020-09-07T11:49:07Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/578.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/578'} | true |
https://api.github.com/repos/huggingface/datasets/issues/577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/577/comments | https://api.github.com/repos/huggingface/datasets/issues/577/events | https://github.com/huggingface/datasets/issues/577 | 694,607,148 | MDU6SXNzdWU2OTQ2MDcxNDg= | 577 | Some languages in wikipedia dataset are not loading | {'avatar_url': 'https://avatars.githubusercontent.com/u/5833357?v=4', 'events_url': 'https://api.github.com/users/gaguilar/events{/privacy}', 'followers_url': 'https://api.github.com/users/gaguilar/followers', 'following_url': 'https://api.github.com/users/gaguilar/following{/other_user}', 'gists_url': 'https://api.github.com/users/gaguilar/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/gaguilar', 'id': 5833357, 'login': 'gaguilar', 'node_id': 'MDQ6VXNlcjU4MzMzNTc=', 'organizations_url': 'https://api.github.com/users/gaguilar/orgs', 'received_events_url': 'https://api.github.com/users/gaguilar/received_events', 'repos_url': 'https://api.github.com/users/gaguilar/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/gaguilar/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/gaguilar/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/gaguilar'} | [] | closed | false | null | [] | null | ['Some wikipedia languages have already been processed by us and are hosted on our google storage. This is the case for "fr" and "en" for example.\r\n\r\nFor other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site.\r\nParsing can take some time for languages with hundreds of MB of xml.\r\n\r\nLet me know if you encounter an error or if you feel that is is taking too long for you.\r\nWe could process those that really take too much time'
'Ok, thanks for clarifying, that makes sense. I will time those examples later today and post back here.\r\n\r\nAlso, it seems that not all dumps should use the same date. For instance, I was checking the Spanish dump doing the following:\r\n```\r\ndata = nlp.load_dataset(\'wikipedia\', \'20200501.es\', beam_runner=\'DirectRunner\', split=\'train\')\r\n```\r\n\r\nI got the error below because this URL does not exist: https://dumps.wikimedia.org/eswiki/20200501/dumpstatus.json. So I checked the actual available dates here https://dumps.wikimedia.org/eswiki/ and there is no 20200501. If one tries for a date available in the link, then the nlp library does not allow such a request because is not in the list of expected datasets.\r\n\r\n```\r\nDownloading and preparing dataset wikipedia/20200501.es (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguilar/.cache/huggingface/datasets/wikipedia/20200501.es/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\nTraceback (most recent call last):\r\n File "<stdin>", line 1, in <module>\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/load.py", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py", line 462, in download_and_prepare\r\n self._download_and_prepare(\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py", line 965, in _download_and_prepare\r\n super(BeamBasedBuilder, self)._download_and_prepare(\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py", line 518, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py", line 422, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract({"info": info_url})\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/download_manager.py", line 220, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/download_manager.py", line 155, in download\r\n downloaded_path_or_paths = map_nested(\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 163, in map_nested\r\n return {\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 164, in <dictcomp>\r\n k: map_nested(\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 191, in map_nested\r\n return function(data_struct)\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/download_manager.py", line 156, in <lambda>\r\n lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/file_utils.py", line 191, in cached_path\r\n output_path = get_from_cache(\r\n File "/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/file_utils.py", line 356, in get_from_cache\r\n raise ConnectionError("Couldn\'t reach {}".format(url))\r\nConnectionError: Couldn\'t reach 
https://dumps.wikimedia.org/eswiki/20200501/dumpstatus.json\r\n```'
'Thanks ! This will be very helpful.\r\n\r\nAbout the date issue, I think it\'s possible to use another date with\r\n\r\n```python\r\nload_dataset("wikipedia", language="es", date="...", beam_runner="...")\r\n```\r\n\r\nHowever we\'ve not processed wikipedia dumps for other dates than 20200501 (yet ?)\r\n\r\nOne more thing that is specific to 20200501.es: it was available once but the `mwparserfromhell` was not able to parse it for some reason, so we didn\'t manage to get a processed version of 20200501.es (see #321 )'
'Cool! Thanks for the trick regarding different dates!\r\n\r\nI checked the download/processing time for retrieving the Arabic Wikipedia dump, and it took about 3.2 hours. I think that this may be a bit impractical when it comes to working with multiple languages (although I understand that storing those datasets in your Google storage may not be very appealing either). \r\n\r\nFor the record, here\'s what I did:\r\n```python\r\nimport nlp\r\nimport time\r\n\r\ndef timeit(filename):\r\n elapsed = time.time()\r\n data = nlp.load_dataset(\'wikipedia\', filename, beam_runner=\'DirectRunner\', split=\'train\')\r\n elapsed = time.time() - elapsed\r\n print(f"Loading the \'{filename}\' data took {elapsed:,.1f} seconds...")\r\n return data\r\n\r\ndata = timeit(\'20200501.ar\')\r\n```\r\n\r\nHere\'s the output:\r\n```\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13.0k/13.0k [00:00<00:00, 8.34MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28.7k/28.7k [00:00<00:00, 954kB/s]\r\nDownloading and preparing dataset wikipedia/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguil20/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47.4k/47.4k [00:00<00:00, 1.40MB/s]\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 79.8M/79.8M [00:15<00:00, 5.13MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 171M/171M [00:33<00:00, 5.13MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 103M/103M [00:20<00:00, 5.14MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 227M/227M [00:44<00:00, 5.06MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 140M/140M [00:28<00:00, 4.96MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 160M/160M [00:30<00:00, 5.20MB/s]\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 97.5M/97.5M [00:19<00:00, 5.06MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222M/222M [00:42<00:00, 
5.21MB/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [03:16<00:00, 196.39s/sources]\r\nDataset wikipedia downloaded and prepared to /home/gaguil20/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50. Subsequent calls will reuse this data.\r\nLoading the \'20200501.ar\' data took 11,582.7 seconds...\r\n````'
'> About the date issue, I think it\'s possible to use another date with\r\n> ```python\r\n> load_dataset("wikipedia", language="es", date="...", beam_runner="...")\r\n> ```\r\n\r\nI tried your suggestion about the date and the function does not accept the language and date keywords. I tried both on `nlp` v0.4 and the new `datasets` library (v1.0.2):\r\n```\r\nload_dataset("wikipedia", language="es", date="20200601", beam_runner=\'DirectRunner\', split=\'train\')\r\n```\r\nFor now, my quick workaround to keep things moving was to simply change the date inside the library at this line: [https://github.com/huggingface/datasets/blob/master/datasets/wikipedia/wikipedia.py#L403](https://github.com/huggingface/datasets/blob/master/datasets/wikipedia/wikipedia.py#L403)\r\n\r\nNote that the date and languages are valid: [https://dumps.wikimedia.org/eswiki/20200601/dumpstatus.json](https://dumps.wikimedia.org/eswiki/20200601/dumpstatus.json)\r\n\r\nAny suggestion is welcome :) @lhoestq \r\n\r\n\r\n## **[UPDATE]**\r\n\r\nThe workaround I mentioned fetched the data, but then I faced another issue (even the log says to report this as bug):\r\n```\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\n```\r\n\r\nHere\'s the full stack (which says that there is a key error caused by this key: `KeyError: \'000nbsp\'`):\r\n\r\n```Downloading and preparing dataset wikipedia/20200601.es (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gustavoag/.cache/huggingface/datasets/wikipedia/20200601.es/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 74.7k/74.7k [00:00<00:00, 1.53MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 232M/232M [00:48<00:00, 4.75MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 442M/442M [01:39<00:00, 4.44MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 173M/173M [00:33<00:00, 5.12MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 344M/344M [01:14<00:00, 4.59MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 541M/541M [01:59<00:00, 4.52MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 476M/476M [01:31<00:00, 5.18MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 545M/545M [02:02<00:00, 4.46MB/s]\r\nDownloading: 
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 299M/299M [01:01<00:00, 4.89MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.60M/9.60M [00:01<00:00, 4.84MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 423M/423M [01:36<00:00, 4.38MB/s]\r\nWARNING:apache_beam.options.pipeline_options:Discarding unparseable args: [\'--lang\', \'es\', \'--date\', \'20200601\', \'--tokenizer\', \'bert-base-multilingual-cased\', \'--cache\', \'train\', \'valid\', \'--max_dataset_length\', \'200000\', \'10000\']\r\n\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nTraceback (most recent call last):\r\n File "apache_beam/runners/common.py", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File "apache_beam/runners/common.py", line 1095, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py", line 500, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py", line 556, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/wikicode.py", line 643, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py", line 63, in __strip__\r\n return self.normalize()\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py", line 178, in normalize\r\n return chrfunc(htmlentities.name2codepoint[self.value])\r\nKeyError: \'000nbsp\'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/runpy.py", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/runpy.py", line 87, in _run_code\r\n exec(code, run_globals)\r\n File "/raid/data/gustavoag/projects/char2subword/research/preprocessing/split_wiki.py", line 96, in <module>\r\n main()\r\n File "/raid/data/gustavoag/projects/char2subword/research/preprocessing/split_wiki.py", 
line 65, in main\r\n data = nlp.load_dataset(\'wikipedia\', f\'{args.date}.{args.lang}\', beam_runner=\'DirectRunner\', split=\'train\')\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/load.py", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py", line 462, in download_and_prepare\r\n self._download_and_prepare(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py", line 969, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/pipeline.py", line 534, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/direct/direct_runner.py", line 119, in run_pipeline\r\n return runner.run_pipeline(pipeline, options)\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 172, in run_pipeline\r\n self._latest_run_result = self.run_via_runner_api(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 183, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 338, in run_stages\r\n stage_results = self._run_stage(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 512, in _run_stage\r\n last_result, deferred_inputs, fired_timers = self._run_bundle(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 556, in _run_bundle\r\n result, splits = bundle_manager.process_bundle(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 940, in process_bundle\r\n for result, split_result in executor.map(execute, zip(part_inputs, # pylint: disable=zip-builtin-not-iterating\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/concurrent/futures/_base.py", line 611, in result_iterator\r\n yield fs.pop().result()\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/concurrent/futures/_base.py", line 439, in result\r\n return self.__get_result()\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result\r\n raise self._exception\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/utils/thread_pool_executor.py", line 44, in run\r\n self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 932, in execute\r\n return bundle_manager.process_bundle(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 837, in process_bundle\r\n result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n File 
"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/worker_handlers.py", line 352, in push\r\n response = self.worker.do_instruction(request)\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker.py", line 479, in do_instruction\r\n return getattr(self, request_type)(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker.py", line 515, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/worker/bundle_processor.py", line 977, in process_bundle\r\n input_op_by_transform_id[element.transform_id].process_encoded(\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/worker/bundle_processor.py", line 218, in process_encoded\r\n self.output(decoded_value)\r\n File "apache_beam/runners/worker/operations.py", line 330, in apache_beam.runners.worker.operations.Operation.output\r\n File "apache_beam/runners/worker/operations.py", line 332, in apache_beam.runners.worker.operations.Operation.output\r\n File "apache_beam/runners/worker/operations.py", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File "apache_beam/runners/worker/operations.py", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/worker/operations.py", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/common.py", line 963, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 1030, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File "apache_beam/runners/common.py", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File "apache_beam/runners/common.py", line 1122, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File "apache_beam/runners/worker/operations.py", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File "apache_beam/runners/worker/operations.py", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/worker/operations.py", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/common.py", line 963, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 1030, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File "apache_beam/runners/common.py", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File "apache_beam/runners/common.py", line 1122, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File "apache_beam/runners/worker/operations.py", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File "apache_beam/runners/worker/operations.py", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/worker/operations.py", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/common.py", line 963, in 
apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 1045, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/future/utils/__init__.py", line 446, in raise_with_traceback\r\n raise exc.with_traceback(traceback)\r\n File "apache_beam/runners/common.py", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File "apache_beam/runners/common.py", line 1095, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py", line 500, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py", line 556, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/wikicode.py", line 643, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py", line 63, in __strip__\r\n return self.normalize()\r\n File "/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py", line 178, in normalize\r\n return chrfunc(htmlentities.name2codepoint[self.value])\r\nKeyError: "000nbsp [while running \'train/Clean content\']"```'
'@lhoestq Any updates on this? I have similar issues with the Romanian dump, tnx.'
'Hey @gaguilar ,\r\n\r\nI just found the ["char2subword" paper](https://arxiv.org/pdf/2010.12730.pdf) and I\'m really interested in trying it out on own vocabs/datasets like for historical texts (I\'ve already [trained some lms](https://github.com/stefan-it/europeana-bert) on newspaper articles with OCR errors).\r\n\r\nDo you plan to release the code for your paper or is it possible to get the implementation 🤔 Many thanks :hugs: '
'Hi @stefan-it! Thanks for your interest in our work! We do plan to release the code, but we will make it available once the paper has been published at a conference. Sorry for the inconvenience!\r\n\r\nHi @lhoestq, do you have any insights for this issue by any chance? Thanks!'
"This is an issue on the `mwparserfromhell` side. You could try to update `mwparserfromhell` and see if it fixes the issue. If it doesn't we'll have to create an issue on their repo for them to fix it.\r\nBut first let's see if the latest version of `mwparserfromhell` does the job."
'I think the work around as suggested in the issue [#886] is not working for several languages, such as `id`. For example, I tried all the dates to download dataset for `id` langauge from the following link: (https://github.com/huggingface/datasets/pull/886) [https://dumps.wikimedia.org/idwiki/](https://dumps.wikimedia.org/idwiki/ )\r\n\r\n> >>> dataset = load_dataset(\'wikipedia\', language=\'id\', date="20210501", beam_runner=\'DirectRunner\')\r\nWARNING:datasets.builder:Using custom data configuration 20210501.id-date=20210501,language=id\r\nDownloading and preparing dataset wikipedia/20210501.id (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/.cache/huggingface/datasets/wikipedia/20210501.id-date=20210501,language=id/0.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1...\r\nTraceback (most recent call last):\r\n File "<stdin>", line 1, in <module>\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/load.py", line 745, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/builder.py", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/builder.py", line 1139, in _download_and_prepare\r\n super(BeamBasedBuilder, self)._download_and_prepare(\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/builder.py", line 630, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File "/Users/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py", line 420, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract({"info": info_url})\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 287, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 203, in map_nested\r\n mapped = [\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 218, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 281, in cached_path\r\n output_path = get_from_cache(\r\n File "/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 623, in get_from_cache\r\n raise ConnectionError("Couldn\'t reach {}".format(url))\r\nConnectionError: Couldn\'t reach https://dumps.wikimedia.org/idwiki/20210501/dumpstatus.json\r\n\r\nMoreover the downloading speed for `non-en` language is very very slow. 
And interestingly the download stopped after approx a couple minutes due to the read time-out. I tried numerous times and the results is same. Is there any feasible way to download non-en language using huggingface?\r\n\r\n> File "/Users/miislamg/opt/anaconda3/envs/proj-semlm/lib/python3.9/site-packages/requests/models.py", line 760, in generate\r\n raise ConnectionError(e)\r\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host=\'dumps.wikimedia.org\', port=443): Read timed out.\r\nDownloading: 7%|████████▎ | 10.2M/153M [03:35<50:07, 47.4kB/s]'
'Hi ! The link https://dumps.wikimedia.org/idwiki/20210501/dumpstatus.json seems to be working fine for me.\r\n\r\nRegarding the time outs, it must come either from an issue on the wikimedia host side, or from your internet connection.\r\nFeel free to try again several times.'
'I was trying to download dataset for `es` language, however I am getting the following error:\r\n```\r\ndataset = load_dataset(\'wikipedia\', language=\'es\', date="20210320", beam_runner=\'DirectRunner\') \r\n```\r\n\r\n```\r\nDownloading and preparing dataset wikipedia/20210320.es (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /scratch/user_name/datasets/wikipedia/20210320.es-date=20210320,language=es/0.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1...\r\nTraceback (most recent call last):\r\n File "apache_beam/runners/common.py", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File "apache_beam/runners/common.py", line 1368, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File "/scratch/user_name/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py", line 492, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File "/scratch/user_name/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py", line 548, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File "/opt/conda/lib/python3.7/site-packages/mwparserfromhell/wikicode.py", line 639, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File "/opt/conda/lib/python3.7/site-packages/mwparserfromhell/nodes/html_entity.py", line 60, in __strip__\r\n return self.normalize()\r\n File "/opt/conda/lib/python3.7/site-packages/mwparserfromhell/nodes/html_entity.py", line 150, in normalize\r\n return chr(htmlentities.name2codepoint[self.value])\r\nKeyError: \'000nbsp\'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File "download_dataset_all.py", line 8, in <module>\r\n dataset = load_dataset(\'wikipedia\', language=language, date="20210320", beam_runner=\'DirectRunner\') \r\n File "/opt/conda/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File "/opt/conda/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File "/opt/conda/lib/python3.7/site-packages/datasets/builder.py", line 1152, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/pipeline.py", line 564, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/direct/direct_runner.py", line 131, in run_pipeline\r\n return runner.run_pipeline(pipeline, options)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 190, in run_pipeline\r\n pipeline.to_runner_api(default_environment=self._default_environment))\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 200, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 366, in run_stages\r\n bundle_context_manager,\r\n File 
"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 562, in _run_stage\r\n bundle_manager)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 602, in _run_bundle\r\n data_input, data_output, input_timers, expected_timer_output)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 903, in process_bundle\r\n result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/worker_handlers.py", line 378, in push\r\n response = self.worker.do_instruction(request)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 610, in do_instruction\r\n getattr(request, request_type), request.instruction_id)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 647, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1001, in process_bundle\r\n element.data)\r\n File "/opt/conda/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 229, in process_encoded\r\n self.output(decoded_value)\r\n File "apache_beam/runners/worker/operations.py", line 356, in apache_beam.runners.worker.operations.Operation.output\r\n File "apache_beam/runners/worker/operations.py", line 358, in apache_beam.runners.worker.operations.Operation.output\r\n File "apache_beam/runners/worker/operations.py", line 220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File "apache_beam/runners/worker/operations.py", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/worker/operations.py", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/common.py", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 1300, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File "apache_beam/runners/common.py", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File "apache_beam/runners/common.py", line 1395, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File "apache_beam/runners/worker/operations.py", line 220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File "apache_beam/runners/worker/operations.py", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/worker/operations.py", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/common.py", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 1300, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File "apache_beam/runners/common.py", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File "apache_beam/runners/common.py", line 1395, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File "apache_beam/runners/worker/operations.py", line 
220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File "apache_beam/runners/worker/operations.py", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/worker/operations.py", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File "apache_beam/runners/common.py", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 1315, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File "/opt/conda/lib/python3.7/site-packages/future/utils/__init__.py", line 446, in raise_with_traceback\r\n raise exc.with_traceback(traceback)\r\n File "apache_beam/runners/common.py", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File "apache_beam/runners/common.py", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File "apache_beam/runners/common.py", line 1368, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File "/scratch/user_name/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py", line 492, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File "/scratch/user_name/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py", line 548, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File "/opt/conda/lib/python3.7/site-packages/mwparserfromhell/wikicode.py", line 639, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File "/opt/conda/lib/python3.7/site-packages/mwparserfromhell/nodes/html_entity.py", line 60, in __strip__\r\n return self.normalize()\r\n File "/opt/conda/lib/python3.7/site-packages/mwparserfromhell/nodes/html_entity.py", line 150, in normalize\r\n return chr(htmlentities.name2codepoint[self.value])\r\nKeyError: "000nbsp [while running \'train/Clean content\']"\r\n```'
'Hi ! This looks related to this issue: https://github.com/huggingface/datasets/issues/1994\r\nBasically the parser that is used (mwparserfromhell) has some issues for some pages in `es`.\r\nWe already reported some issues for `es` on their repo at https://github.com/earwig/mwparserfromhell/issues/247 but it looks like there are still a few issues. Might be a good idea to open a new issue on the mwparserfromhell repo'
'Any updates on this so far?'
'The issue:\r\n```\r\nKeyError: "000nbsp [while running \'train/Clean content\']"\r\n```\r\nreported in comments:\r\n- https://github.com/huggingface/datasets/issues/577#issuecomment-701890059 (by @gaguilar)\r\n- https://github.com/huggingface/datasets/issues/577#issuecomment-879513227 (by @mmiakashs)\r\n\r\nwas normally fixed in the `mwparserfromhell` library and will be accessible in their next release version `0.7`:\r\n- https://github.com/earwig/mwparserfromhell/issues/288'
'mwparserfromhell 0.7 has still not been released, but you might have luck with the dev version:\r\n`pip install git+https://github.com/earwig/mwparserfromhell.git@0f89f44`'] | 2020-09-07 01:16:29+00:00 | 2023-04-11 22:50:48+00:00 | 2022-10-11 11:16:04+00:00 | CONTRIBUTOR | null | Hi,
I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:
```
import nlp
langs = ['ar', 'af', 'an']
for lang in langs:
data = nlp.load_dataset('wikipedia', f'20200501.{lang}', beam_runner='DirectRunner', split='train')
print(lang, len(data))
```
Here's what I see for 'ar' (it gets stuck there):
```
Downloading and preparing dataset wikipedia/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguilar/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...
```
Note that those languages are indeed in the list of expected languages. Any suggestions on how to work around this? Thanks! | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/577/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/577/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/576/comments | https://api.github.com/repos/huggingface/datasets/issues/576/events | https://github.com/huggingface/datasets/pull/576 | 694,348,645 | MDExOlB1bGxSZXF1ZXN0NDgwNzM3NDQ1 | 576 | Fix the code block in doc | {'avatar_url': 'https://avatars.githubusercontent.com/u/22514219?v=4', 'events_url': 'https://api.github.com/users/JetRunner/events{/privacy}', 'followers_url': 'https://api.github.com/users/JetRunner/followers', 'following_url': 'https://api.github.com/users/JetRunner/following{/other_user}', 'gists_url': 'https://api.github.com/users/JetRunner/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/JetRunner', 'id': 22514219, 'login': 'JetRunner', 'node_id': 'MDQ6VXNlcjIyNTE0MjE5', 'organizations_url': 'https://api.github.com/users/JetRunner/orgs', 'received_events_url': 'https://api.github.com/users/JetRunner/received_events', 'repos_url': 'https://api.github.com/users/JetRunner/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/JetRunner/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/JetRunner/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/JetRunner'} | [] | closed | false | null | [] | null | ['thanks :)'] | 2020-09-06 11:40:55+00:00 | 2020-09-07 07:37:32+00:00 | 2020-09-07 07:37:18+00:00 | CONTRIBUTOR | null | null | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/576/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/576/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/576.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/576', 'merged_at': '2020-09-07T07:37:18Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/576.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/576'} | true |
https://api.github.com/repos/huggingface/datasets/issues/575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/575/comments | https://api.github.com/repos/huggingface/datasets/issues/575/events | https://github.com/huggingface/datasets/issues/575 | 693,691,611 | MDU6SXNzdWU2OTM2OTE2MTE= | 575 | Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. | {'avatar_url': 'https://avatars.githubusercontent.com/u/488428?v=4', 'events_url': 'https://api.github.com/users/sudarshan85/events{/privacy}', 'followers_url': 'https://api.github.com/users/sudarshan85/followers', 'following_url': 'https://api.github.com/users/sudarshan85/following{/other_user}', 'gists_url': 'https://api.github.com/users/sudarshan85/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/sudarshan85', 'id': 488428, 'login': 'sudarshan85', 'node_id': 'MDQ6VXNlcjQ4ODQyOA==', 'organizations_url': 'https://api.github.com/users/sudarshan85/orgs', 'received_events_url': 'https://api.github.com/users/sudarshan85/received_events', 'repos_url': 'https://api.github.com/users/sudarshan85/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/sudarshan85/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sudarshan85/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/sudarshan85'} | [] | closed | false | null | [] | null | ["Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though."
"Thanks for the report, I'll give a look!"
'I am also seeing a similar error when running the following:\r\n\r\n```\r\nimport nlp\r\ndataset = load_dataset(\'cola\')\r\n```\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File "<stdin>", line 1, in <module>\r\n File "/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/load.py", line 509, in load_dataset\r\n module_path = prepare_module(path, download_config=download_config, dataset=True)\r\n File "/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/load.py", line 248, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File "/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/utils/file_utils.py", line 191, in cached_path\r\n output_path = get_from_cache(\r\n File "/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/utils/file_utils.py", line 356, in get_from_cache\r\n raise ConnectionError("Couldn\'t reach {}".format(url))\r\nConnectionError: Couldn\'t reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cola/cola.py\r\n```'
'@jeswan `"cola"` is not a valid dataset identifier (you can check the up-to-date list on https://huggingface.co/datasets) but you can find cola inside glue.'
'Ah right. Thanks!'
'Hi. Closing this one since #626 updated the glue urls.\r\n\r\n> 1. Why is it still blocking? Is it still downloading?\r\n\r\nAfter downloading it generates the arrow file by iterating through the examples.\r\nThe number of examples processed by second is shown during the processing (not sure why it was not the case for you)\r\n\r\n> 2. I specified split as train, so why is the test folder being populated?\r\n\r\nIt downloads every split\r\n\r\n\r\n\r\n'] | 2020-09-04 21:46:25+00:00 | 2020-09-22 10:41:36+00:00 | 2020-09-22 10:41:36+00:00 | NONE | null | Hi,
I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset:
```
>>> from nlp import load_dataset
>>> dataset = load_dataset('glue', 'mrpc', split='train')
```
However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the last few lines):
```
/net/vaosl01/opt/NFS/su0/miniconda3/envs/hf/lib/python3.7/site-packages/nlp/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)
354 " to False."
355 )
--> 356 raise ConnectionError("Couldn't reach {}".format(url))
357
358 # From now on, connected is True.
ConnectionError: Couldn't reach https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc
```
I tried glue with cola and sst2. I got the same error, just instead of mrpc in the URL, it was replaced with cola and sst2.
Since this was not working, I thought I'd try another dataset. So I tried downloading the imdb dataset:
```
ds = load_dataset('imdb', split='train')
```
This downloads the data, but it just blocks after that:
```
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.56k/4.56k [00:00<00:00, 1.38MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.07k/2.07k [00:00<00:00, 1.15MB/s]
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown sizetotal: 207.28 MiB) to /net/vaosl01/opt/NFS/su0/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 84.1M/84.1M [00:07<00:00, 11.1MB/s]
```
I checked the folder `$HF_HOME/datasets/downloads/extracted/<id>/aclImdb`. This folder is constantly growing in size. When I navigated to the train folder within, there was no file. However, the test folder seemed to be populating. The last time I checked it was 327M. I thought the Imdb dataset was smaller than that. My questions are:
1. Why is it still blocking? Is it still downloading?
2. I specified split as train, so why is the test folder being populated?
3. I read somewhere that after downloading, `nlp` converts the text files into some sort of `arrow` files, which will also take a while. Is this also happening here?
Thanks.
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/575/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/575/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/574/comments | https://api.github.com/repos/huggingface/datasets/issues/574/events | https://github.com/huggingface/datasets/pull/574 | 693,364,853 | MDExOlB1bGxSZXF1ZXN0NDc5ODU5NzQy | 574 | Add modules cache | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['All the tests pass on my side. Not sure if it is a cache issue or a pytest issue or a circleci issue.\r\nEDIT: I have the same error on google colab. Trying to fix that'
"I think I fixed it (sorry didn't notice you were on it as well)"] | 2020-09-04 16:30:03+00:00 | 2020-09-22 10:27:08+00:00 | 2020-09-07 09:01:35+00:00 | MEMBER | null | As discusses in #554 , we should use a module cache directory outside of the python packages directory since we may not have write permissions.
I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`.
In this directory, a module `nlp_modules` is created so that datasets can be added to `nlp_modules.datasets` and metrics to `nlp_modules.metrics`. `nlp_modules` doesn't exist on PyPI.
If someone using cloudpickle still wants to have the downloaded dataset/metrics scripts to be inside the nlp directory, it is still possible to change the environment variable HF_MODULES_CACHE to be a path inside the nlp lib. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/574/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/574/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/574.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/574', 'merged_at': '2020-09-07T09:01:35Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/574.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/574'} | true |
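For illustration only, here is a minimal sketch of how a dynamic modules cache along these lines could be set up. It assumes a `HF_MODULES_CACHE` location and the `nlp_modules` package name from the description above; the helper name `init_dynamic_modules` and the exact directory layout are assumptions, not the library's actual internals.

```python
import importlib
import os
import sys

# Assumed default location; the real library may resolve this differently.
HF_MODULES_CACHE = os.getenv("HF_MODULES_CACHE", os.path.expanduser("~/.cache/huggingface/modules"))


def init_dynamic_modules(name="nlp_modules"):
    """Create the dynamic modules package on disk and make it importable."""
    dynamic_modules_path = os.path.join(HF_MODULES_CACHE, name)
    for subdir in ("", "datasets", "metrics"):
        package_dir = os.path.join(dynamic_modules_path, subdir)
        os.makedirs(package_dir, exist_ok=True)
        # Each package level needs an __init__.py to be importable.
        init_file = os.path.join(package_dir, "__init__.py")
        if not os.path.exists(init_file):
            open(init_file, "w").close()
    # Put the cache directory (the parent of `nlp_modules`) on the python path.
    if HF_MODULES_CACHE not in sys.path:
        sys.path.append(HF_MODULES_CACHE)
    return dynamic_modules_path


# Usage sketch: after copying `my_dataset.py` into
# <HF_MODULES_CACHE>/nlp_modules/datasets/my_dataset/, it can be imported like any module:
# module = importlib.import_module("nlp_modules.datasets.my_dataset.my_dataset")
```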
https://api.github.com/repos/huggingface/datasets/issues/573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/573/comments | https://api.github.com/repos/huggingface/datasets/issues/573/events | https://github.com/huggingface/datasets/pull/573 | 693,091,790 | MDExOlB1bGxSZXF1ZXN0NDc5NjE4Mzc2 | 573 | Faster caching for text dataset | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-09-04 11:58:34+00:00 | 2020-09-04 12:53:24+00:00 | 2020-09-04 12:53:23+00:00 | MEMBER | null | As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time.
To make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each file to get a hash. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/573/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/573/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/573.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/573', 'merged_at': '2020-09-04T12:53:23Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/573.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/573'} | true |
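As a rough illustration of this idea (the function name and the exact fields hashed are assumptions, not the library's actual code), the fingerprint can be computed from file metadata only:

```python
import hashlib
import os


def data_files_fingerprint(data_files):
    """Hash each file's absolute path and last-modified time instead of its contents."""
    m = hashlib.md5()
    for path in sorted(data_files):
        stat = os.stat(path)
        # Only metadata is hashed, so large text files no longer need to be read in full.
        m.update(os.path.abspath(path).encode("utf-8"))
        m.update(str(stat.st_mtime).encode("utf-8"))
    return m.hexdigest()


# cache_dir_name = data_files_fingerprint(["train.txt", "valid.txt"])
```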
https://api.github.com/repos/huggingface/datasets/issues/572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/572/comments | https://api.github.com/repos/huggingface/datasets/issues/572/events | https://github.com/huggingface/datasets/pull/572 | 692,598,231 | MDExOlB1bGxSZXF1ZXN0NDc5MTgyNDU3 | 572 | Add CLUE Benchmark (11 datasets) | {'avatar_url': 'https://avatars.githubusercontent.com/u/22514219?v=4', 'events_url': 'https://api.github.com/users/JetRunner/events{/privacy}', 'followers_url': 'https://api.github.com/users/JetRunner/followers', 'following_url': 'https://api.github.com/users/JetRunner/following{/other_user}', 'gists_url': 'https://api.github.com/users/JetRunner/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/JetRunner', 'id': 22514219, 'login': 'JetRunner', 'node_id': 'MDQ6VXNlcjIyNTE0MjE5', 'organizations_url': 'https://api.github.com/users/JetRunner/orgs', 'received_events_url': 'https://api.github.com/users/JetRunner/received_events', 'repos_url': 'https://api.github.com/users/JetRunner/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/JetRunner/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/JetRunner/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/JetRunner'} | [] | closed | false | null | [] | null | ["Thanks, @lhoestq! I've addressed the comments. \r\nAlso, I have tried to use `ClassLabel` [when possible](https://github.com/huggingface/nlp/pull/572/files#diff-1026ac7d7b78bf029cb0ebe63162c77dR297). Is there still somewhere else we can use `ClassLabel`? "
'I believe CI failure is unrelated.' 'Great job! '] | 2020-09-04 01:57:40+00:00 | 2020-09-07 09:59:11+00:00 | 2020-09-07 09:59:10+00:00 | CONTRIBUTOR | null | Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE). | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/572/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/572/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/572.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/572', 'merged_at': '2020-09-07T09:59:10Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/572.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/572'} | true |
https://api.github.com/repos/huggingface/datasets/issues/571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/571/comments | https://api.github.com/repos/huggingface/datasets/issues/571/events | https://github.com/huggingface/datasets/pull/571 | 692,109,287 | MDExOlB1bGxSZXF1ZXN0NDc4NzQ2MjMz | 571 | Serialization | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ["I've added save/load for dataset dicts.\r\n\r\nI agree that in the future we should also have a way to save indexes too, and also the in-place history of transforms.\r\n\r\nAlso I understand that it would be cool to have the load function directly at the root of the library, but I'm not sure this should be inside `load_dataset` that loads dataset scripts and data from the dataset repository. Maybe something like `load_from_disk` ?"
'Yes `load_from_disk` and `save_to_disk` could work as well.'
'I renamed save/load to save_to_dick/load_from_disk, and I added `nlp.load_from_disk`\r\n\r\n`nlp.load_from_disk` can load either a Dataset or a DatasetDict.'
"Awesome! Let's add them to the doc and we're good to go!"] | 2020-09-03 16:21:38+00:00 | 2020-09-07 07:46:08+00:00 | 2020-09-07 07:46:07+00:00 | MEMBER | null | I added `save` and `load` method to serialize/deserialize a dataset object in a folder.
It moves the arrow files there (or writes them if the tables were in memory), and saves the pickle state in a JSON file `state.json`, except the info, which goes in a separate file `dataset_info.json`.
Example:
```python
import nlp
squad = nlp.load_dataset("squad", split="train")
squad.save("tmp/squad")
squad = nlp.Dataset.load("tmp/squad")
```
`ls tmp/squad`
```
dataset_info.json squad-train.arrow state.json
```
`cat tmp/squad/state.json`
```json
{
"_data": null,
"_data_files": [
{
"filename": "squad-train.arrow",
"skip": 0,
"take": 87599
}
],
"_fingerprint": "61f452797a686bc1",
"_format_columns": null,
"_format_kwargs": {},
"_format_type": null,
"_indexes": {},
"_indices": null,
"_indices_data_files": [],
"_inplace_history": [
{
"transforms": []
}
],
"_output_all_columns": false,
"_split": "train"
}
```
`cat tmp/squad/dataset_info.json`
```json
{
"builder_name": "squad",
"citation": "@article{2016arXiv160605250R,\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},\n Konstantin and {Liang}, Percy},\n title = \"{SQuAD: 100,000+ Questions for Machine Comprehension of Text}\",\n journal = {arXiv e-prints},\n year = 2016,\n eid = {arXiv:1606.05250},\n pages = {arXiv:1606.05250},\narchivePrefix = {arXiv},\n eprint = {1606.05250},\n}\n",
"config_name": "plain_text",
"dataset_size": 89789763,
"description": "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n",
"download_checksums": {
"https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json": {
"checksum": "95aa6a52d5d6a735563366753ca50492a658031da74f301ac5238b03966972c9",
"num_bytes": 4854279
},
"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json": {
"checksum": "3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955",
"num_bytes": 30288272
}
},
"download_size": 35142551,
"features": {
"answers": {
"_type": "Sequence",
"feature": {
"answer_start": {
"_type": "Value",
"dtype": "int32",
"id": null
},
"text": {
"_type": "Value",
"dtype": "string",
"id": null
}
},
"id": null,
"length": -1
},
"context": {
"_type": "Value",
"dtype": "string",
"id": null
},
"id": {
"_type": "Value",
"dtype": "string",
"id": null
},
"question": {
"_type": "Value",
"dtype": "string",
"id": null
},
"title": {
"_type": "Value",
"dtype": "string",
"id": null
}
},
"homepage": "https://rajpurkar.github.io/SQuAD-explorer/",
"license": "",
"post_processed": {
"features": null,
"resources_checksums": {
"train": {},
"train[:10%]": {}
}
},
"post_processing_size": 0,
"size_in_bytes": 124932314,
"splits": {
"train": {
"dataset_name": "squad",
"name": "train",
"num_bytes": 79317110,
"num_examples": 87599
},
"validation": {
"dataset_name": "squad",
"name": "validation",
"num_bytes": 10472653,
"num_examples": 10570
}
},
"supervised_keys": null,
"version": {
"description": "New split API (https://tensorflow.org/datasets/splits)",
"major": 1,
"minor": 0,
"nlp_version_to_prepare": null,
"patch": 0,
"version_str": "1.0.0"
}
}
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/571/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/571/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/571.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/571', 'merged_at': '2020-09-07T07:46:07Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/571.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/571'} | true |
https://api.github.com/repos/huggingface/datasets/issues/570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/570/comments | https://api.github.com/repos/huggingface/datasets/issues/570/events | https://github.com/huggingface/datasets/pull/570 | 691,846,397 | MDExOlB1bGxSZXF1ZXN0NDc4NTI3OTQz | 570 | add reuters21578 dataset | {'avatar_url': 'https://avatars.githubusercontent.com/u/959590?v=4', 'events_url': 'https://api.github.com/users/jplu/events{/privacy}', 'followers_url': 'https://api.github.com/users/jplu/followers', 'following_url': 'https://api.github.com/users/jplu/following{/other_user}', 'gists_url': 'https://api.github.com/users/jplu/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jplu', 'id': 959590, 'login': 'jplu', 'node_id': 'MDQ6VXNlcjk1OTU5MA==', 'organizations_url': 'https://api.github.com/users/jplu/orgs', 'received_events_url': 'https://api.github.com/users/jplu/received_events', 'repos_url': 'https://api.github.com/users/jplu/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jplu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jplu/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jplu'} | [] | closed | false | null | [] | null | [] | 2020-09-03 10:25:47+00:00 | 2020-09-03 10:46:52+00:00 | 2020-09-03 10:46:51+00:00 | CONTRIBUTOR | null | Reopen a PR this the merge. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/570/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/570/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/570.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/570', 'merged_at': '2020-09-03T10:46:51Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/570.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/570'} | true |
https://api.github.com/repos/huggingface/datasets/issues/569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/569/comments | https://api.github.com/repos/huggingface/datasets/issues/569/events | https://github.com/huggingface/datasets/pull/569 | 691,832,720 | MDExOlB1bGxSZXF1ZXN0NDc4NTE2Mzc2 | 569 | Revert "add reuters21578 dataset" | {'avatar_url': 'https://avatars.githubusercontent.com/u/959590?v=4', 'events_url': 'https://api.github.com/users/jplu/events{/privacy}', 'followers_url': 'https://api.github.com/users/jplu/followers', 'following_url': 'https://api.github.com/users/jplu/following{/other_user}', 'gists_url': 'https://api.github.com/users/jplu/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jplu', 'id': 959590, 'login': 'jplu', 'node_id': 'MDQ6VXNlcjk1OTU5MA==', 'organizations_url': 'https://api.github.com/users/jplu/orgs', 'received_events_url': 'https://api.github.com/users/jplu/received_events', 'repos_url': 'https://api.github.com/users/jplu/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jplu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jplu/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jplu'} | [] | closed | false | null | [] | null | [] | 2020-09-03 10:06:16+00:00 | 2020-09-03 10:07:13+00:00 | 2020-09-03 10:07:12+00:00 | CONTRIBUTOR | null | Reverts huggingface/nlp#471 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/569/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/569/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/569.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/569', 'merged_at': '2020-09-03T10:07:12Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/569.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/569'} | true |
https://api.github.com/repos/huggingface/datasets/issues/568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/568/comments | https://api.github.com/repos/huggingface/datasets/issues/568/events | https://github.com/huggingface/datasets/issues/568 | 691,638,656 | MDU6SXNzdWU2OTE2Mzg2NTY= | 568 | `metric.compute` throws `ArrowInvalid` error | {'avatar_url': 'https://avatars.githubusercontent.com/u/2287797?v=4', 'events_url': 'https://api.github.com/users/ibeltagy/events{/privacy}', 'followers_url': 'https://api.github.com/users/ibeltagy/followers', 'following_url': 'https://api.github.com/users/ibeltagy/following{/other_user}', 'gists_url': 'https://api.github.com/users/ibeltagy/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/ibeltagy', 'id': 2287797, 'login': 'ibeltagy', 'node_id': 'MDQ6VXNlcjIyODc3OTc=', 'organizations_url': 'https://api.github.com/users/ibeltagy/orgs', 'received_events_url': 'https://api.github.com/users/ibeltagy/received_events', 'repos_url': 'https://api.github.com/users/ibeltagy/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/ibeltagy/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ibeltagy/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/ibeltagy'} | [] | closed | false | null | [] | null | ['Hmm might be related to what we are solving in #564'
"Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 "
'Closing this one as it was fixed in #654 \r\nFeel free to re-open if you have other questions'] | 2020-09-03 04:56:57+00:00 | 2020-10-05 16:33:53+00:00 | 2020-10-05 16:33:53+00:00 | NONE | null | I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly, so I can't easily reproduce it. This is using `nlp==0.4.0`
```
File "/home/beltagy/trainer.py", line 92, in validation_step
rouge_scores = rouge.compute(predictions=generated_str, references=gold_str, rouge_types=['rouge2', 'rouge1', 'rougeL'])
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 224, in compute
self.finalize(timeout=timeout)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 213, in finalize
self.data = Dataset(**reader.read_files(node_files))
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 217, in read_files
dataset_kwargs = self._read_files(files=files, info=self._info, original_instructions=original_instructions)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 162, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 276, in _get_dataset_from_filename
f = pa.ipc.open_stream(mmap)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 173, in open_stream
return RecordBatchStreamReader(source)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 64, in __init__
self._open(source)
File "pyarrow/ipc.pxi", line 469, in pyarrow.lib._RecordBatchStreamReader._open
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Tried reading schema message, was null or length 0
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/568/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/568/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/567/comments | https://api.github.com/repos/huggingface/datasets/issues/567/events | https://github.com/huggingface/datasets/pull/567 | 691,430,245 | MDExOlB1bGxSZXF1ZXN0NDc4MTc2Njgx | 567 | Fix BLEURT metrics for backward compatibility | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | [] | 2020-09-02 21:22:35+00:00 | 2020-09-03 07:29:52+00:00 | 2020-09-03 07:29:50+00:00 | MEMBER | null | Fix #565 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/567/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/567/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/567.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/567', 'merged_at': '2020-09-03T07:29:50Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/567.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/567'} | true |
https://api.github.com/repos/huggingface/datasets/issues/566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/566/comments | https://api.github.com/repos/huggingface/datasets/issues/566/events | https://github.com/huggingface/datasets/pull/566 | 691,160,208 | MDExOlB1bGxSZXF1ZXN0NDc3OTM2NTIz | 566 | Remove logger pickling to fix gg colab issues | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-09-02 16:16:21+00:00 | 2020-09-03 16:31:53+00:00 | 2020-09-03 16:31:52+00:00 | MEMBER | null | A `logger` objects are not picklable in google colab, contrary to `logger` objects in jupyter notebooks or in python shells.
This currently causes issues in Google Colab.
Indeed, calling any `Dataset` method triggers the fingerprint update, which pickles the transform function; since the logger comes along with it, this results in an error (full stacktrace [here](http://pastebin.fr/64330)):
```python
/usr/local/lib/python3.6/dist-packages/zmq/backend/cython/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__()
TypeError: no default __reduce__ due to non-trivial __cinit__
```
To fix that I no longer dump the transform (`_map_single`, `select`, etc.), but the full name only (`nlp.arrow_dataset.Dataset._map_single`, `nlp.arrow_dataset.Dataset.select`, etc.) | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/566/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/566/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/566.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/566', 'merged_at': '2020-09-03T16:31:52Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/566.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/566'} | true |
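A rough sketch of that approach, assuming the fingerprinting code serializes the transform's qualified name and resolves it back when needed (the helper names below are illustrative, not the actual `nlp` internals):

```python
import importlib


def dump_transform_name(method):
    """Serialize only the importable path of a transform, e.g. 'nlp.arrow_dataset.Dataset.select'."""
    return f"{method.__module__}.{method.__qualname__}"


def load_transform_from_name(full_name):
    """Resolve a 'module.Class.method' string back to the actual function."""
    module_name, cls_name, fn_name = full_name.rsplit(".", 2)
    cls = getattr(importlib.import_module(module_name), cls_name)
    return getattr(cls, fn_name)


# dump_transform_name(Dataset._map_single) -> "nlp.arrow_dataset.Dataset._map_single"
# No logger (or anything else attached to the function's module) ends up in the pickled payload.
```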
https://api.github.com/repos/huggingface/datasets/issues/565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/565/comments | https://api.github.com/repos/huggingface/datasets/issues/565/events | https://github.com/huggingface/datasets/issues/565 | 691,039,121 | MDU6SXNzdWU2OTEwMzkxMjE= | 565 | No module named 'nlp.logging' | {'avatar_url': 'https://avatars.githubusercontent.com/u/66633754?v=4', 'events_url': 'https://api.github.com/users/melody-ju/events{/privacy}', 'followers_url': 'https://api.github.com/users/melody-ju/followers', 'following_url': 'https://api.github.com/users/melody-ju/following{/other_user}', 'gists_url': 'https://api.github.com/users/melody-ju/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/melody-ju', 'id': 66633754, 'login': 'melody-ju', 'node_id': 'MDQ6VXNlcjY2NjMzNzU0', 'organizations_url': 'https://api.github.com/users/melody-ju/orgs', 'received_events_url': 'https://api.github.com/users/melody-ju/received_events', 'repos_url': 'https://api.github.com/users/melody-ju/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/melody-ju/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/melody-ju/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/melody-ju'} | [] | closed | false | null | [] | null | ['Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We\'ll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I\'d suggest you to download the right bleurt folder from github ([this one](https://github.com/huggingface/nlp/tree/0.4.0/metrics/bleurt)) and do\r\n\r\n```python\r\nfrom nlp import load_metric\r\n\r\nbleurt = load_metric("path/to/bleurt/folder")\r\n```\r\n\r\nTo download it you can either clone the repo or download the `bleurt.py` file and place it in a folder named `bleurt` '
"Actually we can fix this on our side, this script didn't had to be updated. I'll do it in a few minutes"] | 2020-09-02 13:49:50+00:00 | 2020-09-03 07:29:50+00:00 | 2020-09-03 07:29:50+00:00 | NONE | null | Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing?
```
>>> import nlp
2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
>>> bleurt = nlp.load_metric("bleurt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 443, in load_metric
metric_cls = import_main_class(module_path, dataset=False)
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 61, in import_main_class
module = importlib.import_module(module_path)
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/metrics/bleurt/43448cf2959ea81d3ae0e71c5c8ee31dc15eed9932f197f5f50673cbcecff2b5/bleurt.py", line 20, in <module>
from nlp.logging import get_logger
ModuleNotFoundError: No module named 'nlp.logging'
```
Just to show once again that I can't import the logging module:
```
>>> import nlp
2020-09-02 13:48:38.190621: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
>>> nlp.__version__
'0.4.0'
>>> from nlp.logging import get_logger
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'nlp.logging'
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/565/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/565/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/564/comments | https://api.github.com/repos/huggingface/datasets/issues/564/events | https://github.com/huggingface/datasets/pull/564 | 691,000,020 | MDExOlB1bGxSZXF1ZXN0NDc3ODAyMTk2 | 564 | Wait for writing in distributed metrics | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['I agree this fix the problem for the CI where the files are always created in a new and clean temporary directory.\r\n\r\nHowever, in a general setting of a succession of fast distributed operation, the files could already exist from previous metrics runs but one process may still finish before another has even started in which case it would mix results from separate operations.\r\n\r\nI feel like the most robust way to solve this is to setup a rendez-vous on the first time we write on files and where each process will test and only finish its operation when it cannot acquire a lock on all the other processes (meaning they all have started).\r\n\r\nWhat do you think?'
'What do you think of this @thomwolf ? I check all the locks before finalizing'
'Ok on my side @lhoestq (cannot add you as a reviewer)'
"The test doesn't pass if I add:\r\n```python\r\n import time\r\n if self.process_id == 1:\r\n time.sleep(0.5)\r\n```\r\nright before `self.add_batch` in `Metric.compute`.\r\n\r\nI'm investigating why it doesn't work in that case"
'It looks like the process 1 runs `_check_all_processes_locks` correctly and then finishes and releases its lock before process 0 even managed to to run `_check_all_processes_locks` correctly.'
'Strange!'
'I changed the way the rendez-vous is done @thomwolf , let me know what you think.\r\nThe idea is that the master process has an additional lock `rendez_vous_lock` to tell every other process to wait for everyone to be ready before starting to write'] | 2020-09-02 12:58:50+00:00 | 2020-09-09 09:13:23+00:00 | 2020-09-09 09:13:22+00:00 | MEMBER | null | There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes haven't started writing.
To fix that I added a custom locking mechanism that waits for the file to exist before trying to read it | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/564/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/564/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/564.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/564', 'merged_at': '2020-09-09T09:13:22Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/564.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/564'} | true |
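A simplified sketch of such a wait on the reading side (the actual implementation, including the `rendez_vous_lock` discussed in the comments, is more involved; the names below are illustrative):

```python
import os
import time


def wait_for_file(path, timeout=100.0, poll_interval=0.05):
    """Block until another process has created `path`, or fail after `timeout` seconds."""
    start = time.time()
    while not os.path.exists(path):
        if time.time() - start > timeout:
            raise TimeoutError(f"Expected file {path} to exist after {timeout}s")
        time.sleep(poll_interval)


# Process 0 only gathers the per-process prediction files once they all exist:
# for rank in range(num_processes):
#     wait_for_file(f"{cache_prefix}-{rank}.arrow")
```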
https://api.github.com/repos/huggingface/datasets/issues/563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/563/comments | https://api.github.com/repos/huggingface/datasets/issues/563/events | https://github.com/huggingface/datasets/pull/563 | 690,908,674 | MDExOlB1bGxSZXF1ZXN0NDc3NzI2MTEz | 563 | [Large datasets] Speed up download and processing | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | ['Looks all good :)\r\nI rebased from master and added a test for parallel `map_nested`'
"you're da best"] | 2020-09-02 10:31:54+00:00 | 2020-09-09 09:03:33+00:00 | 2020-09-09 09:03:32+00:00 | MEMBER | null | Various improvements to speed-up creation and processing of large scale datasets.
Currently:
- distributed downloads
- remove etag from datafiles hashes to spare a request when restarting a failed download | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/563/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/563/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/563.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/563', 'merged_at': '2020-09-09T09:03:32Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/563.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/563'} | true |
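An illustrative sketch of the distributed downloads item, assuming a simple process pool maps a download function over the URLs (the real `DownloadManager` logic is more elaborate):

```python
from multiprocessing import Pool


def download_all(urls, download_one, num_proc=16):
    """Download many files in parallel; `download_one` maps a URL to a local cached path."""
    if num_proc <= 1 or len(urls) <= 1:
        return [download_one(url) for url in urls]
    with Pool(num_proc) as pool:
        return pool.map(download_one, urls)


# local_paths = download_all(url_list, cached_download_fn, num_proc=16)
```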
https://api.github.com/repos/huggingface/datasets/issues/562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/562/comments | https://api.github.com/repos/huggingface/datasets/issues/562/events | https://github.com/huggingface/datasets/pull/562 | 690,907,604 | MDExOlB1bGxSZXF1ZXN0NDc3NzI1MjMx | 562 | [Reproductibility] Allow to pin versions of datasets/metrics | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | ['Closing this one in favor of #584 '] | 2020-09-02 10:30:13+00:00 | 2023-09-24 09:49:42+00:00 | 2020-09-09 13:04:54+00:00 | MEMBER | null | Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of datasets and metric scripts:
```
dataset = nlp.load_dataset('squad', version='1.0.0')
metric = nlp.load_metric('squad', version='1.0.0')
```
Notes:
- version numbers are the release versions of the library
- currently only possible for canonical datasets/metrics, ie. integrated in the GitHub repo of the library | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/562/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/562/timeline | null | null | 1 | {'diff_url': 'https://github.com/huggingface/datasets/pull/562.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/562', 'merged_at': None, 'patch_url': 'https://github.com/huggingface/datasets/pull/562.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/562'} | true |
https://api.github.com/repos/huggingface/datasets/issues/561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/561/comments | https://api.github.com/repos/huggingface/datasets/issues/561/events | https://github.com/huggingface/datasets/pull/561 | 690,871,415 | MDExOlB1bGxSZXF1ZXN0NDc3Njk1NDQy | 561 | Made `share_dataset` more readable | {'avatar_url': 'https://avatars.githubusercontent.com/u/26709476?v=4', 'events_url': 'https://api.github.com/users/TevenLeScao/events{/privacy}', 'followers_url': 'https://api.github.com/users/TevenLeScao/followers', 'following_url': 'https://api.github.com/users/TevenLeScao/following{/other_user}', 'gists_url': 'https://api.github.com/users/TevenLeScao/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/TevenLeScao', 'id': 26709476, 'login': 'TevenLeScao', 'node_id': 'MDQ6VXNlcjI2NzA5NDc2', 'organizations_url': 'https://api.github.com/users/TevenLeScao/orgs', 'received_events_url': 'https://api.github.com/users/TevenLeScao/received_events', 'repos_url': 'https://api.github.com/users/TevenLeScao/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/TevenLeScao/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/TevenLeScao'} | [] | closed | false | null | [] | null | [] | 2020-09-02 09:34:48+00:00 | 2020-09-03 09:00:30+00:00 | 2020-09-03 09:00:29+00:00 | CONTRIBUTOR | null | null | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/561/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/561/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/561.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/561', 'merged_at': '2020-09-03T09:00:29Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/561.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/561'} | true |
https://api.github.com/repos/huggingface/datasets/issues/560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/560/comments | https://api.github.com/repos/huggingface/datasets/issues/560/events | https://github.com/huggingface/datasets/issues/560 | 690,488,764 | MDU6SXNzdWU2OTA0ODg3NjQ= | 560 | Using custom DownloadConfig results in an error | {'avatar_url': 'https://avatars.githubusercontent.com/u/1789921?v=4', 'events_url': 'https://api.github.com/users/ynouri/events{/privacy}', 'followers_url': 'https://api.github.com/users/ynouri/followers', 'following_url': 'https://api.github.com/users/ynouri/following{/other_user}', 'gists_url': 'https://api.github.com/users/ynouri/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/ynouri', 'id': 1789921, 'login': 'ynouri', 'node_id': 'MDQ6VXNlcjE3ODk5MjE=', 'organizations_url': 'https://api.github.com/users/ynouri/orgs', 'received_events_url': 'https://api.github.com/users/ynouri/received_events', 'repos_url': 'https://api.github.com/users/ynouri/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/ynouri/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ynouri/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/ynouri'} | [] | closed | false | null | [] | null | ['From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`.\r\n\r\nSee:\r\n* https://github.com/huggingface/nlp/blob/5fb61e1012bda724a9b6b847307d90a1380abfa5/src/nlp/load.py#L227\r\n* https://github.com/huggingface/nlp/blob/5fb61e1012bda724a9b6b847307d90a1380abfa5/src/nlp/builder.py#L388\r\n\r\nMaybe a cleaner solution would be to always instantiate a default `DownloadConfig` object at the top-level, have it as non-optional for the lower-level functions and treat it as immutable. '
"Thanks for the report, I'll take a look.\r\n\r\nWhat is your specific use-case for providing a DownloadConfig object?\r\n"
"Thanks. Our use case involves running a training job behind a corporate firewall with no access to any external resources (S3, GCP or other web resources).\r\n\r\nI was thinking about a 2-steps process:\r\n1) Download the resources / artifacts using some secure corporate channel, ie run `nlp.load_dataset()` without a specific `DownloadConfig`. After that, collect the files from the `$HF_HOME` folder\r\n2) Copy the `$HF_HOME` folder in the firewalled environment. Run `nlp.load_dataset()` with a custom config `DownloadConfig(local_files_only=True)`\r\n\r\nHowever this ends up a bit clunky in practice, even when solving the `DownloadConfig` issue above. For example, the `filename` hash computed in `get_from_cache()` differs in the `local_files_only=False` vs `local_files_only=True` case (local case defaults `etag` to `None`, which results in a different hash). So effectively step 2) above doesn't work because the hash computed differs from the hash in the cache folder. Some hacks / workaround are possible but this solution becomes very convoluted.\r\nhttps://github.com/huggingface/nlp/blob/c214aa5a4430c1df1bcd0619fd94d6abdf9d2da7/src/nlp/utils/file_utils.py#L417\r\n\r\nWould you recommend a different path?\r\n"
'I see.\r\n\r\nProbably the easiest way for you would be that we add simple serialization/deserialization methods to the Dataset and DatasetDict objects once the data files have been downloaded and all the dataset is processed.\r\n\r\nWhat do you think @lhoestq ?'
'This use-case will be solved with #571 '
'Thank you very much @thomwolf and @lhoestq we will give it a try'] | 2020-09-01 22:23:02+00:00 | 2022-10-04 17:23:45+00:00 | 2022-10-04 17:23:45+00:00 | NONE | null | ## Version / Environment
Ubuntu 18.04
Python 3.6.8
nlp 0.4.0
## Description
Loading the `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, it results in an error.
## How to reproduce
### Example without DownloadConfig --> works
```python
import os
os.environ["HF_HOME"] = "/data/hf-test-without-dl-config-01/"
import logging
import nlp
logging.basicConfig(level=logging.INFO)
if __name__ == "__main__":
imdb = nlp.load_dataset(path="imdb")
```
### Example with DownloadConfig --> doesn't work
```python
import os
os.environ["HF_HOME"] = "/data/hf-test-with-dl-config-01/"
import logging
import nlp
from nlp.utils import DownloadConfig
logging.basicConfig(level=logging.INFO)
if __name__ == "__main__":
download_config = DownloadConfig()
imdb = nlp.load_dataset(path="imdb", download_config=download_config)
```
Error traceback:
```
Traceback (most recent call last):
File "/.../example_with_dl_config.py", line 13, in <module>
imdb = nlp.load_dataset(path="imdb", download_config=download_config)
File "/.../python3.6/python3.6/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 518, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/.../python3.6/python3.6/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743/imdb.py", line 86, in _split_generators
arch_path = dl_manager.download_and_extract(_DOWNLOAD_URL)
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 220, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 158, in download
self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 108, in _record_sizes_checksums
self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)
File "/.../python3.6/python3.6/site-packages/nlp/utils/info_utils.py", line 79, in get_size_checksum_dict
with open(path, "rb") as f:
IsADirectoryError: [Errno 21] Is a directory: '/data/hf-test-with-dl-config-01/datasets/extracted/b6802c5b61824b2c1f7dbf7cda6696b5f2e22214e18d171ce1ed3be90c931ce5'
```
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/560/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/560/timeline | null | completed | null | null | false |
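A minimal sketch of the two-step offline workflow described in the comments above, assuming the `DownloadConfig` mutation and etag-dependent cache-hash issues have been resolved (otherwise step 2 may still miss the cache); the cache path is a placeholder.

```python
import os

os.environ["HF_HOME"] = "/data/hf-offline-cache/"  # placeholder cache location

import nlp
from nlp.utils import DownloadConfig

# Step 1 (machine with internet access): populate the cache normally.
imdb = nlp.load_dataset(path="imdb")

# Step 2 (firewalled machine, after copying the HF_HOME folder over):
# resolve everything from the local cache only, with no network calls.
offline_config = DownloadConfig(local_files_only=True)
imdb = nlp.load_dataset(path="imdb", download_config=offline_config)
```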
https://api.github.com/repos/huggingface/datasets/issues/559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/559/comments | https://api.github.com/repos/huggingface/datasets/issues/559/events | https://github.com/huggingface/datasets/pull/559 | 690,411,263 | MDExOlB1bGxSZXF1ZXN0NDc3MzAzOTM2 | 559 | Adding the KILT knowledge source and tasks | {'avatar_url': 'https://avatars.githubusercontent.com/u/10469459?v=4', 'events_url': 'https://api.github.com/users/yjernite/events{/privacy}', 'followers_url': 'https://api.github.com/users/yjernite/followers', 'following_url': 'https://api.github.com/users/yjernite/following{/other_user}', 'gists_url': 'https://api.github.com/users/yjernite/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/yjernite', 'id': 10469459, 'login': 'yjernite', 'node_id': 'MDQ6VXNlcjEwNDY5NDU5', 'organizations_url': 'https://api.github.com/users/yjernite/orgs', 'received_events_url': 'https://api.github.com/users/yjernite/received_events', 'repos_url': 'https://api.github.com/users/yjernite/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/yjernite/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/yjernite/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/yjernite'} | [] | closed | false | null | [] | null | ['Feel free to merge when you are happy with it @yjernite :-)'] | 2020-09-01 20:05:13+00:00 | 2020-09-04 18:05:47+00:00 | 2020-09-04 18:05:47+00:00 | MEMBER | null | This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with:
```
import nlp
kilt_wikipedia = nlp.load_dataset('kilt_wikipedia')
kilt_tasks = nlp.load_dataset('kilt_tasks')
triviaqa = nlp.load_dataset('trivia_qa', 'unfiltered.nocontext')
triviaqa_map = {}
for k in ['train', 'validation', 'test']:
    triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(triviaqa[k]['question_id'])])
    kilt_tasks[k + '_triviaqa'] = kilt_tasks[k + '_triviaqa'].filter(lambda x: x['id'] in triviaqa_map)
    kilt_tasks[k + '_triviaqa'] = kilt_tasks[k + '_triviaqa'].map(lambda x: {'input': triviaqa[k][triviaqa_map[x['id']]]['question']})
```
It would be great to have the dataset by Monday, which is when the paper should land on arXiv and @fabiopetroni is planning on tweeting about the paper and `facebookresearch` repository for the dataset | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/559/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/559/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/559.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/559', 'merged_at': '2020-09-04T18:05:47Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/559.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/559'} | true |
https://api.github.com/repos/huggingface/datasets/issues/558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/558/comments | https://api.github.com/repos/huggingface/datasets/issues/558/events | https://github.com/huggingface/datasets/pull/558 | 690,318,105 | MDExOlB1bGxSZXF1ZXN0NDc3MjI2ODA0 | 558 | Rerun pip install -e | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-09-01 17:24:39+00:00 | 2020-09-01 17:24:51+00:00 | 2020-09-01 17:24:50+00:00 | MEMBER | null | Hopefully it fixes the github actions | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/558/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/558/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/558.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/558', 'merged_at': '2020-09-01T17:24:50Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/558.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/558'} | true |
https://api.github.com/repos/huggingface/datasets/issues/557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/557/comments | https://api.github.com/repos/huggingface/datasets/issues/557/events | https://github.com/huggingface/datasets/pull/557 | 690,220,135 | MDExOlB1bGxSZXF1ZXN0NDc3MTQ1NjAx | 557 | Fix a few typos | {'avatar_url': 'https://avatars.githubusercontent.com/u/326577?v=4', 'events_url': 'https://api.github.com/users/julien-c/events{/privacy}', 'followers_url': 'https://api.github.com/users/julien-c/followers', 'following_url': 'https://api.github.com/users/julien-c/following{/other_user}', 'gists_url': 'https://api.github.com/users/julien-c/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/julien-c', 'id': 326577, 'login': 'julien-c', 'node_id': 'MDQ6VXNlcjMyNjU3Nw==', 'organizations_url': 'https://api.github.com/users/julien-c/orgs', 'received_events_url': 'https://api.github.com/users/julien-c/received_events', 'repos_url': 'https://api.github.com/users/julien-c/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/julien-c/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/julien-c/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/julien-c'} | [] | closed | false | null | [] | null | [] | 2020-09-01 15:03:24+00:00 | 2020-09-02 07:39:08+00:00 | 2020-09-02 07:39:07+00:00 | MEMBER | null | null | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/557/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/557/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/557.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/557', 'merged_at': '2020-09-02T07:39:06Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/557.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/557'} | true |
https://api.github.com/repos/huggingface/datasets/issues/556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/556/comments | https://api.github.com/repos/huggingface/datasets/issues/556/events | https://github.com/huggingface/datasets/pull/556 | 690,218,423 | MDExOlB1bGxSZXF1ZXN0NDc3MTQ0MTky | 556 | Add DailyDialog | {'avatar_url': 'https://avatars.githubusercontent.com/u/326577?v=4', 'events_url': 'https://api.github.com/users/julien-c/events{/privacy}', 'followers_url': 'https://api.github.com/users/julien-c/followers', 'following_url': 'https://api.github.com/users/julien-c/following{/other_user}', 'gists_url': 'https://api.github.com/users/julien-c/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/julien-c', 'id': 326577, 'login': 'julien-c', 'node_id': 'MDQ6VXNlcjMyNjU3Nw==', 'organizations_url': 'https://api.github.com/users/julien-c/orgs', 'received_events_url': 'https://api.github.com/users/julien-c/received_events', 'repos_url': 'https://api.github.com/users/julien-c/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/julien-c/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/julien-c/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/julien-c'} | [] | closed | false | null | [] | null | [] | 2020-09-01 15:01:15+00:00 | 2020-09-03 15:42:03+00:00 | 2020-09-03 15:38:39+00:00 | MEMBER | null | http://yanran.li/dailydialog.html
https://arxiv.org/pdf/1710.03957.pdf
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/556/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/556/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/556.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/556', 'merged_at': '2020-09-03T15:38:39Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/556.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/556'} | true |
https://api.github.com/repos/huggingface/datasets/issues/555 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/555/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/555/comments | https://api.github.com/repos/huggingface/datasets/issues/555/events | https://github.com/huggingface/datasets/pull/555 | 690,197,725 | MDExOlB1bGxSZXF1ZXN0NDc3MTI2OTIy | 555 | Upgrade pip in benchmark github action | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-09-01 14:37:26+00:00 | 2020-09-01 15:26:16+00:00 | 2020-09-01 15:26:15+00:00 | MEMBER | null | It looks like it fixes the `import nlp` issue we have | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/555/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/555/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/555.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/555', 'merged_at': '2020-09-01T15:26:15Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/555.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/555'} | true |
https://api.github.com/repos/huggingface/datasets/issues/554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/554/comments | https://api.github.com/repos/huggingface/datasets/issues/554/events | https://github.com/huggingface/datasets/issues/554 | 690,173,214 | MDU6SXNzdWU2OTAxNzMyMTQ= | 554 | nlp downloads to its module path | {'avatar_url': 'https://avatars.githubusercontent.com/u/49398?v=4', 'events_url': 'https://api.github.com/users/danieldk/events{/privacy}', 'followers_url': 'https://api.github.com/users/danieldk/followers', 'following_url': 'https://api.github.com/users/danieldk/following{/other_user}', 'gists_url': 'https://api.github.com/users/danieldk/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/danieldk', 'id': 49398, 'login': 'danieldk', 'node_id': 'MDQ6VXNlcjQ5Mzk4', 'organizations_url': 'https://api.github.com/users/danieldk/orgs', 'received_events_url': 'https://api.github.com/users/danieldk/received_events', 'repos_url': 'https://api.github.com/users/danieldk/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/danieldk/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/danieldk/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/danieldk'} | [] | closed | false | null | [] | null | ['Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?'
'> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are installing in a virtual environment?\r\n\r\nThen it would work, because the package is in a writable path.'
"If it's fine for you then this is the recommended way to solve this issue."
"> If it's fine for you then this is the recommended way to solve this issue.\r\n\r\nI don't want to use a virtual environment, because Nix is fully reproducible, and virtual environments are not. And I am the maintainer of the `transformers` in nixpkgs, so sooner or later I will have to package `nlp`, since it is becoming a dependency of `transformers` ;)."
"Ok interesting. We could have another check to see if it's possible to download and import the datasets script at another location than the module path. I think this would probably involve tweaking the python system path dynamically.\r\n\r\nI don't know anything about Nix so if you want to give this a try your self we can guide you or you can give us more information on your general project and how this works.\r\n\r\nRegarding `nlp` and `transformers`, we are not sure `nlp` will become a required dependency for `transformers`. It will probably be used a lot in the examples but I think it probably won't be a required dependency for the main package since we try to keep it as light as possible in terms of deps.\r\n\r\nHappy to help you make all these things work better for your use-case "
'@danieldk modules are now installed in a different location (by default in the cache directory of the lib, in `~/.cache/huggingface/modules`). You can also change that using the environment variable `HF_MODULES_PATH`\r\n\r\nFeel free to play with this change from the master branch for now, and let us know if it sounds good for you :)\r\nWe plan to do a release in the next coming days'
'Awesome! I’ll hopefully have some time in the coming days to try this.'
'> Feel free to play with this change from the master branch for now, and let us know if it sounds good for you :)\r\n> We plan to do a release in the next coming days\r\n\r\nThanks for making this change! I just packaged the latest commit on master and it works like a charm now! :partying_face: '] | 2020-09-01 14:06:14+00:00 | 2020-09-11 06:19:24+00:00 | 2020-09-11 06:19:24+00:00 | NONE | null | I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:
```>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 530, in load_dataset
module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 329, in prepare_module
os.makedirs(main_folder_path, exist_ok=True)
File "/nix/store/685kq8pyhrvajah1hdsfn4q7gm3j4yd4-python3-3.8.5/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/datasets/squad'
```
Do you have any suggested workaround for this issue?
Perhaps overriding the default value for `force_local_path` of `prepare_module`? | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/554/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/554/timeline | null | completed | null | null | false |
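A minimal sketch of the workaround described in the comments, assuming the downloaded dataset scripts live under the module cache controlled by the `HF_MODULES_PATH` environment variable and that it must be set before importing the library; the target directory is a placeholder.

```python
import os

# Point the library at a writable location for downloaded dataset scripts.
os.environ["HF_MODULES_PATH"] = "/tmp/hf_modules"  # placeholder writable path

import nlp

squad_dataset = nlp.load_dataset("squad")
```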
https://api.github.com/repos/huggingface/datasets/issues/553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/553/comments | https://api.github.com/repos/huggingface/datasets/issues/553/events | https://github.com/huggingface/datasets/pull/553 | 690,143,182 | MDExOlB1bGxSZXF1ZXN0NDc3MDgxNTg2 | 553 | [Fix GitHub Actions] test adding tmate | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | [] | 2020-09-01 13:28:03+00:00 | 2021-05-05 18:24:38+00:00 | 2020-09-03 09:01:13+00:00 | MEMBER | null | null | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/553/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/553/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/553.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/553', 'merged_at': None, 'patch_url': 'https://github.com/huggingface/datasets/pull/553.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/553'} | true |
https://api.github.com/repos/huggingface/datasets/issues/552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/552/comments | https://api.github.com/repos/huggingface/datasets/issues/552/events | https://github.com/huggingface/datasets/pull/552 | 690,079,429 | MDExOlB1bGxSZXF1ZXN0NDc3MDI4MzMx | 552 | Add multiprocessing | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['Logging looks like\r\n\r\n```\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #0 will write at playground/tmp_00000_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #1 will write at playground/tmp_00001_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #2 will write at playground/tmp_00002_of_00004.arrow\r\nDone writing 21899 indices in 3854224 bytes .\r\nProcess #3 will write at playground/tmp_00003_of_00004.arrow\r\nSpawning 4 processes\r\n#3: 100%|████████████████████████████████████████████████| 21899/21899 [00:02<00:00, 8027.41ex/s]\r\n#0: 100%|████████████████████████████████████████████████| 21900/21900 [00:02<00:00, 7982.87ex/s]\r\n#1: 100%|████████████████████████████████████████████████| 21900/21900 [00:02<00:00, 7923.89ex/s]\r\n#2: 100%|████████████████████████████████████████████████| 21900/21900 [00:02<00:00, 7920.04ex/s]\r\nConcatenating 4 shards from multiprocessing\r\n```'
'I added tests and improved logging.\r\nBoth `map` and `filter` support multiprocessing'
'A bit strange that the benchmarks on map/filter are worse than `master`.\r\n(maybe because they are not done on the same machine)'
'The benchmark also got worse in other PRs (see [here](https://github.com/huggingface/nlp/pull/550#commitcomment-41931609) for example, where we have 16sec for `map fast-tokenizer batched` and 18 sec for `map identity`)'
'Hi,\r\n\r\nwhen I use the multiprocessing in ```.map```:\r\n```\r\ndataset = load_dataset("text", data_files=file_path, split="train")\r\ndataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True, num_proc=16)\r\ndataset.set_format(type=\'torch\', columns=[\'input_ids\'])\r\n```\r\nI get the following error:\r\n```\r\nTraceback (most recent call last):\r\n File "src/run.py", line 373, in <module>\r\n main()\r\n File "src/run.py", line 295, in main\r\n get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\r\n File "src/run.py", line 153, in get_dataset\r\n dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,\r\n File "/root/miniconda3/envs/py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1287, in map\r\n transformed_shards = [r.get() for r in results]\r\n File "/root/miniconda3/envs/py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1287, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/pool.py", line 771, in get\r\n raise self._value\r\n put(task)\r\n File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/connection.py", line 206, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps\r\n cls(buf, protocol).dump(obj)\r\nAttributeError: Can\'t pickle local object \'get_dataset.<locals>.<lambda>\'\r\n```\r\nI think you should use [pathos](https://github.com/uqfoundation/pathos) to pickle the lambda function and some others!\r\nI change the 30 line of src/datasets/arrow_dataset.py as following:\r\n```\r\n# 30 line: from multiprocessing import Pool, RLock\r\nimport pathos\r\nfrom pathos.multiprocessing import Pool\r\nfrom multiprocessing import RLock\r\n```\r\nand it works!'
"That's very cool indeed !\r\nShall we condiser adding this dependency @thomwolf ?"
"We already use `dill` so that's definitely a very interesting option indeed!"
'it gets stuck on debian 9 when num_proc > 1\r\n'
"Are you using a tokenizer ?\r\nDid you try to set `TOKENIZERS_PARALLELISM=false` ?\r\n\r\nFeel free to discuss it in #620 , we're discussing this issue"
'I set `TOKENIZERS_PARALLELISM=false`. Just the warning went away. The program was still stuck\r\n'] | 2020-09-01 11:56:17+00:00 | 2020-09-22 15:11:56+00:00 | 2020-09-02 10:01:25+00:00 | MEMBER | null | Adding multiprocessing to `.map`
It works in 3 steps:
- shard the dataset in `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets
Example of usage:
```python
from nlp import load_dataset
dataset = load_dataset("squad", split="train")
def function(x):
return {"lowered": x.lower()}
processed = d.map(
function,
input_columns=["context"],
num_proc=4,
cache_file_name="playground/tmp.arrow",
load_from_cache_file=False
)
```
Here it writes 4 files depending on the process rank:
- `playground/tmp_00000_of_00004.arrow`
- `playground/tmp_00001_of_00004.arrow`
- `playground/tmp_00002_of_00004.arrow`
- `playground/tmp_00003_of_00004.arrow`
The suffix format can be specified by the user.
If the `cache_file_name` is not specified, it writes into separate files depending on the fingerprint, as usual.
I still need to:
- write tests for this
- try to improve the logging (currently it shows 4 progress bars, but if one finishes before the others, then the following messages are written over the progress bars)
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/552/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/552/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/552.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/552', 'merged_at': '2020-09-02T10:01:25Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/552.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/552'} | true |
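A minimal sketch of the pickling caveat raised in the comments: with `num_proc > 1` the mapped function is sent to worker processes, so a module-level function avoids the `Can't pickle local object` error reported for lambdas (unless a dill/pathos-based serializer, as suggested above, is adopted).

```python
from nlp import load_dataset

def lowercase_context(example):
    # Module-level functions are picklable, unlike locally defined lambdas.
    return {"lowered": example["context"].lower()}

dataset = load_dataset("squad", split="train")
processed = dataset.map(lowercase_context, num_proc=4)
```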
https://api.github.com/repos/huggingface/datasets/issues/551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/551/comments | https://api.github.com/repos/huggingface/datasets/issues/551/events | https://github.com/huggingface/datasets/pull/551 | 690,034,762 | MDExOlB1bGxSZXF1ZXN0NDc2OTkwNjAw | 551 | added HANS dataset | {'avatar_url': 'https://avatars.githubusercontent.com/u/26709476?v=4', 'events_url': 'https://api.github.com/users/TevenLeScao/events{/privacy}', 'followers_url': 'https://api.github.com/users/TevenLeScao/followers', 'following_url': 'https://api.github.com/users/TevenLeScao/following{/other_user}', 'gists_url': 'https://api.github.com/users/TevenLeScao/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/TevenLeScao', 'id': 26709476, 'login': 'TevenLeScao', 'node_id': 'MDQ6VXNlcjI2NzA5NDc2', 'organizations_url': 'https://api.github.com/users/TevenLeScao/orgs', 'received_events_url': 'https://api.github.com/users/TevenLeScao/received_events', 'repos_url': 'https://api.github.com/users/TevenLeScao/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/TevenLeScao/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/TevenLeScao'} | [] | closed | false | null | [] | null | [] | 2020-09-01 10:42:02+00:00 | 2020-09-01 12:17:10+00:00 | 2020-09-01 12:17:10+00:00 | CONTRIBUTOR | null | Adds the [HANS](https://github.com/tommccoy1/hans) dataset to evaluate NLI systems. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/551/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/551/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/551.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/551', 'merged_at': '2020-09-01T12:17:10Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/551.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/551'} | true |
https://api.github.com/repos/huggingface/datasets/issues/550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/550/comments | https://api.github.com/repos/huggingface/datasets/issues/550/events | https://github.com/huggingface/datasets/pull/550 | 689,775,914 | MDExOlB1bGxSZXF1ZXN0NDc2NzgyNDY1 | 550 | [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539) | {'avatar_url': 'https://avatars.githubusercontent.com/u/5833357?v=4', 'events_url': 'https://api.github.com/users/gaguilar/events{/privacy}', 'followers_url': 'https://api.github.com/users/gaguilar/followers', 'following_url': 'https://api.github.com/users/gaguilar/following{/other_user}', 'gists_url': 'https://api.github.com/users/gaguilar/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/gaguilar', 'id': 5833357, 'login': 'gaguilar', 'node_id': 'MDQ6VXNlcjU4MzMzNTc=', 'organizations_url': 'https://api.github.com/users/gaguilar/orgs', 'received_events_url': 'https://api.github.com/users/gaguilar/received_events', 'repos_url': 'https://api.github.com/users/gaguilar/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/gaguilar/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/gaguilar/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/gaguilar'} | [] | closed | false | null | [] | null | ['Thanks a lot for that!\r\nThe line you are mentioning is a bug indeed, do you mind fixing it at the same time?'
'No worries! \r\n\r\nI pushed right away the fix, but then I realized that the master branch already had it, so I ended up merging the master branch with lince locally and then overwriting the previous commit in origin/lince. Hopefully, this is not too messy :)\r\n'] | 2020-09-01 03:27:03+00:00 | 2020-09-03 09:06:01+00:00 | 2020-09-03 09:06:01+00:00 | CONTRIBUTOR | null | Hi,
I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I run this command from the nlp root directory:
```
python nlp-cli test ./datasets/lince --save_infos --all_configs
```
**NOTE**: I needed to change [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/commands/dummy_data.py#L8) from: `from .utils.logging import get_logger` to `from nlp.utils.logging import get_logger`, otherwise the script was not able to import `get_logger`. However, I did not include that in this PR since that could have been just my environment (and another PR could be fixing this already if it is actually an issue). | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/550/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/550/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/550.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/550', 'merged_at': '2020-09-03T09:06:01Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/550.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/550'} | true |
https://api.github.com/repos/huggingface/datasets/issues/549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/549/comments | https://api.github.com/repos/huggingface/datasets/issues/549/events | https://github.com/huggingface/datasets/pull/549 | 689,766,465 | MDExOlB1bGxSZXF1ZXN0NDc2Nzc0OTI1 | 549 | Fix bleurt logging import | {'avatar_url': 'https://avatars.githubusercontent.com/u/2238344?v=4', 'events_url': 'https://api.github.com/users/jbragg/events{/privacy}', 'followers_url': 'https://api.github.com/users/jbragg/followers', 'following_url': 'https://api.github.com/users/jbragg/following{/other_user}', 'gists_url': 'https://api.github.com/users/jbragg/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jbragg', 'id': 2238344, 'login': 'jbragg', 'node_id': 'MDQ6VXNlcjIyMzgzNDQ=', 'organizations_url': 'https://api.github.com/users/jbragg/orgs', 'received_events_url': 'https://api.github.com/users/jbragg/received_events', 'repos_url': 'https://api.github.com/users/jbragg/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jbragg/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jbragg/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jbragg'} | [] | closed | false | null | [] | null | ['That’s a good point that we started to discuss internally as well. We should pin the dataset en metrics code by default indeed.\r\nLet’s update this in the coming release.'
'Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release).'] | 2020-09-01 03:01:25+00:00 | 2020-09-03 18:04:46+00:00 | 2020-09-03 09:04:20+00:00 | CONTRIBUTOR | null | Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that it is guaranteed not to change and possibly fail on repository changes?
Thanks (and also for your continued work on the lib...) | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/549/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/549/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/549.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/549', 'merged_at': None, 'patch_url': 'https://github.com/huggingface/datasets/pull/549.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/549'} | true |
https://api.github.com/repos/huggingface/datasets/issues/548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/548/comments | https://api.github.com/repos/huggingface/datasets/issues/548/events | https://github.com/huggingface/datasets/pull/548 | 689,285,996 | MDExOlB1bGxSZXF1ZXN0NDc2MzYzMjU1 | 548 | [Breaking] Switch text loading to multi-threaded PyArrow loading | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | ['Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` tag no ? Apparently we can get this tag with `os.path.getmtime(path)`'
'I just rebased from master to include the hashing changes from #573 '
'I think this is ready to merge, no?' "Indeed it's ready to merge :)"
'Ok added the breaking change info and we can merge indeed.\r\n'] | 2020-08-31 15:15:41+00:00 | 2020-09-08 10:19:58+00:00 | 2020-09-08 10:19:57+00:00 | MEMBER | null | Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader.
If it works ok, it would fix #546.
**Breaking change**:
The text lines now do not include final line-breaks anymore. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/548/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/548/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/548.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/548', 'merged_at': '2020-09-08T10:19:57Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/548.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/548'} | true |
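A minimal sketch of what the breaking change means for downstream code; `my_corpus.txt` is a placeholder file and the `text` column name is assumed from the existing text loading script.

```python
import nlp

dataset = nlp.load_dataset("text", data_files="my_corpus.txt", split="train")

# After this change each example no longer carries a trailing newline,
# so an explicit .rstrip("\n") in downstream code becomes a no-op.
first_line = dataset[0]["text"]
assert not first_line.endswith("\n")
```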
https://api.github.com/repos/huggingface/datasets/issues/547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/547/comments | https://api.github.com/repos/huggingface/datasets/issues/547/events | https://github.com/huggingface/datasets/pull/547 | 689,268,589 | MDExOlB1bGxSZXF1ZXN0NDc2MzQ4OTk5 | 547 | [Distributed] Making loading distributed datasets a bit safer | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | [] | 2020-08-31 14:51:34+00:00 | 2020-08-31 15:16:30+00:00 | 2020-08-31 15:16:29+00:00 | MEMBER | null | Add some file-locks during dataset loading | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/547/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/547/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/547.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/547', 'merged_at': '2020-08-31T15:16:29Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/547.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/547'} | true |
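An illustrative sketch of the general pattern (not the PR's actual code), assuming the `filelock` package: serialize the cache-building step so that concurrent distributed workers do not download and prepare the same dataset at the same time.

```python
import nlp
from filelock import FileLock

# Only one process at a time builds the cache; the others block on the lock
# and then reuse the already-prepared Arrow files.
with FileLock("/tmp/nlp_squad.lock"):  # placeholder lock path
    dataset = nlp.load_dataset("squad", split="train")
```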
https://api.github.com/repos/huggingface/datasets/issues/546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/546/comments | https://api.github.com/repos/huggingface/datasets/issues/546/events | https://github.com/huggingface/datasets/issues/546 | 689,186,526 | MDU6SXNzdWU2ODkxODY1MjY= | 546 | Very slow data loading on large dataset | {'avatar_url': 'https://avatars.githubusercontent.com/u/6087313?v=4', 'events_url': 'https://api.github.com/users/agemagician/events{/privacy}', 'followers_url': 'https://api.github.com/users/agemagician/followers', 'following_url': 'https://api.github.com/users/agemagician/following{/other_user}', 'gists_url': 'https://api.github.com/users/agemagician/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/agemagician', 'id': 6087313, 'login': 'agemagician', 'node_id': 'MDQ6VXNlcjYwODczMTM=', 'organizations_url': 'https://api.github.com/users/agemagician/orgs', 'received_events_url': 'https://api.github.com/users/agemagician/received_events', 'repos_url': 'https://api.github.com/users/agemagician/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/agemagician/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/agemagician/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/agemagician'} | [] | closed | false | null | [] | null | ["When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much faster.\r\n\r\nHowever for a 1TB dataset, the conversion can indeed take time. You could try to load parts of it in parallel, and then use `nlp.concatenate_datasets` to get your full dataset."
'Humm, we can give a look at these large scale datasets indeed.\r\n\r\nDo you mind sharing a few stats on your dataset so I can try to test on a similar one?\r\n\r\nIn particular some orders of magnitudes for the number of files, number of lines per files, line lengths.'
'@lhoestq Yes, I understand that the first time requires more time. The concatenate_datasets seems to be a workaround, but I believe a multi-processing method should be integrated into load_dataset to make it easier and more efficient for users.\r\n\r\n@thomwolf Sure, here are the statistics:\r\nNumber of lines: 4.2 Billion\r\nNumber of files: 6K\r\nNumber of tokens: 800 Billion\r\nThe number of lines is distributed equally across these 6k files.\r\nThe line length varies between 100 tokens to 40k tokens.\r\n'
'@agemagician you can give a try at a multithreaded version if you want (currently on the #548).\r\n\r\nTo test it, you just need to copy the new `text` processing script which is [here](https://github.com/huggingface/nlp/blob/07d92a82b7594498ff702f3cca55c074e2052257/datasets/text/text.py) somewhere on your drive and give it\'s local path instead of `text` to `load_dataset`. E.g. in your example:\r\n```python\r\ntrain_files = glob.glob("xxx/*.txt",recursive=True)\r\nrandom.shuffle(train_files)\r\n\r\nprint(train_files)\r\n\r\ndataset = nlp.load_dataset(\'./datasets/text.py\', # path to where you\'ve dowloaded the multi-threaded text loading script\r\n data_files=train_files,\r\n name="customDataset",\r\n version="1.0.0",\r\n cache_dir="xxx/nlp")\r\n```'
'I have already generated the dataset, but now I tried to reload it and it is still very slow.\r\n\r\nI also have installed your commit and it is slow, even after the dataset was already generated.\r\n`pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257`\r\n\r\nIt uses only a single thread.\r\n\r\nDid I miss something ?'
'As mentioned in #548 , each time you call `load_dataset` with `data_files=`, they are hashed to get the cache directory name. Hashing can be too slow with 1TB of data. I feel like we should have a faster way of getting a hash that identifies the input data files'
'I believe this is really a very important feature; otherwise, we will still have the slow-loading issue even if the data cache generation is fast.'
"Hmm ok then maybe it's the hashing step indeed.\r\n\r\nLet's see if we can improve this as well.\r\n\r\n(you will very likely have to regenerate your dataset if we change this part of the lib though since I expect modifications on this part of the lib to results in new hashes)"
"Also, @agemagician you have to follow the step I indicate in my previous message [here](https://github.com/huggingface/nlp/issues/546#issuecomment-684648927) to use the new text loading script.\r\n\r\nJust doing `pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257` like you did won't use the new script (they are not inside the library but hosted on our hub)."
'No problem, I will regenerate it. This will make us see if we solved both issues and now both the data generation step, as well as the hashing step, is fast.'
'Any news for the hashing ?' "I'm working on it today :)"
"Ok so now the text files won't be hashed.\r\n\r\nI also updated #548 to include this change.\r\nLet us know if it helps @agemagician :)"
'Perfect thanks for your amazing work.'
'Right now, caching 18 GB of data takes 1 hour 10 minutes. Is that the expected time? @lhoestq @agemagician \r\nAt this rate (assuming larger files cache at the same rate), caching the full mC4 (27 TB) would require about a month (~26 days). \r\n'
'Hi ! Currently it is that slow because we haven\'t implemented parallelism for the dataset generation yet.\r\nThough we will definitely work on this :)\r\n\r\nFor now I\'d recommend loading the dataset shard by shard in parallel, and then concatenate them:\r\n```python\r\n# in one process, load first 100 files for english\r\nshard1 = load_dataset("allenai/c4", data_files="multilingual/c4-en.tfrecord-000**.json.gz")\r\n# in another process load next 100 files for english\r\nshard2 = load_dataset("allenai/c4", data_files="multilingual/c4-en.tfrecord-001**.json.gz")\r\n\r\n# finally\r\nconcatenate_datasets([shard1, shard2, ...])'
'Thanks for the help..!!!'
'Sorry to write on a closed issue but, has there been any progress on parallelizing the `load_dataset` function?'
'Hi ! No but this is in our plans (probably a few weeks)'
"I'm literally crying waiting for the trainer to restart from checkpoint. It's getting stuck at `get_train_dataloader` and I think this is to do with the same issue... has there been any progress on this?"
"> I'm literally crying waiting for the trainer to restart from checkpoint. It's getting stuck at get_train_dataloader and I think this is to do with the same issue...\r\n\r\nOnce the dataset is cached once, it's not regenerated again. Your issue seems different"
"hmmm, yes. I'll come back with details on this, fairly easy to reproduce. Takes about 30 minutes to get from checkpoint loading to starting training..."] | 2020-08-31 12:57:23+00:00 | 2022-06-17 17:06:51+00:00 | 2020-09-08 10:19:57+00:00 | NONE | null | I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small (about 1 GB), but it doesn't scale.
It also uses a single thread during the data loading step.
```
import glob
import random

import nlp

train_files = glob.glob("xxx/*.txt", recursive=True)
random.shuffle(train_files)
print(train_files)
dataset = nlp.load_dataset('text',
data_files=train_files,
name="customDataset",
version="1.0.0",
cache_dir="xxx/nlp")
```
Is there something that I am missing ? | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/546/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/546/timeline | null | completed | null | null | false |
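The discussion above mentions loading the corpus shard by shard and merging the pieces with `concatenate_datasets` as a workaround. Below is a minimal sketch of that idea, reusing the placeholder paths from the report; the shards are loaded sequentially here, but each `load_dataset` call could instead run in its own process as suggested in the thread:

```python
import glob

import nlp

train_files = sorted(glob.glob("xxx/*.txt"))
shard_size = 100  # number of files per shard (arbitrary choice)

shards = []
for start in range(0, len(train_files), shard_size):
    # Each shard gets its own cache file, so an interrupted run only loses
    # the shard that was being built.
    shard = nlp.load_dataset(
        "text",
        data_files=train_files[start : start + shard_size],
        split="train",
        cache_dir="xxx/nlp",
    )
    shards.append(shard)

dataset = nlp.concatenate_datasets(shards)
```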
https://api.github.com/repos/huggingface/datasets/issues/545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/545/comments | https://api.github.com/repos/huggingface/datasets/issues/545/events | https://github.com/huggingface/datasets/issues/545 | 689,138,878 | MDU6SXNzdWU2ODkxMzg4Nzg= | 545 | New release coming up for this library | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | ['Update: release is planed mid-next week.'] | 2020-08-31 11:37:38+00:00 | 2021-01-13 10:59:04+00:00 | 2021-01-13 10:59:04+00:00 | MEMBER | null | Hi all,
A few words on the roadmap for this library.
The next release will be a big one and is planned for the end of this week.
In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval techniques), it will:
- have support for multi-modal datasets
- include various significant improvements in speed for standard processing (map, shuffling, ...)
- have better support for metrics (better caching and a robust API) and a bigger focus on reproducibility
- change the name to the final name (voted by the community): `datasets`
- be the 1.0.0 release as we think the API will be mostly stabilized from now on | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 4, 'laugh': 0, 'rocket': 0, 'total_count': 4, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/545/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/545/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/544/comments | https://api.github.com/repos/huggingface/datasets/issues/544/events | https://github.com/huggingface/datasets/pull/544 | 689,062,519 | MDExOlB1bGxSZXF1ZXN0NDc2MTc4MDM2 | 544 | [Distributed] Fix load_dataset error when multiprocessing + add test | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | [] | 2020-08-31 09:30:10+00:00 | 2020-08-31 11:15:11+00:00 | 2020-08-31 11:15:10+00:00 | MEMBER | null | Fix #543 + add test | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/544/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/544/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/544.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/544', 'merged_at': '2020-08-31T11:15:10Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/544.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/544'} | true |
https://api.github.com/repos/huggingface/datasets/issues/543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/543/comments | https://api.github.com/repos/huggingface/datasets/issues/543/events | https://github.com/huggingface/datasets/issues/543 | 688,644,407 | MDU6SXNzdWU2ODg2NDQ0MDc= | 543 | nlp.load_dataset is not safe for multi processes when loading from local files | {'avatar_url': 'https://avatars.githubusercontent.com/u/55288513?v=4', 'events_url': 'https://api.github.com/users/luyug/events{/privacy}', 'followers_url': 'https://api.github.com/users/luyug/followers', 'following_url': 'https://api.github.com/users/luyug/following{/other_user}', 'gists_url': 'https://api.github.com/users/luyug/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/luyug', 'id': 55288513, 'login': 'luyug', 'node_id': 'MDQ6VXNlcjU1Mjg4NTEz', 'organizations_url': 'https://api.github.com/users/luyug/orgs', 'received_events_url': 'https://api.github.com/users/luyug/received_events', 'repos_url': 'https://api.github.com/users/luyug/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/luyug/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/luyug/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/luyug'} | [] | closed | false | null | [] | null | ["I'll take a look!"] | 2020-08-30 03:20:34+00:00 | 2020-08-31 11:15:10+00:00 | 2020-08-31 11:15:10+00:00 | NONE | null | Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])`
concurrently from multiple processes will raise `FileExistsError` from line 430 of the builder: https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438
This is likely because multiple processes step into `download_and_prepare`: https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/load.py#L550-L554
This can happen when launching distributed training with commands like `python -m torch.distributed.launch --nproc_per_node 4` on a new collection of files never loaded before.
I can create a PR that puts in some file locks. It would be helpful if I can be informed of the convention for naming and placement of the lock. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/543/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/543/timeline | null | completed | null | null | false |
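Pending a fix in the library, here is a rough sketch of the file-lock idea proposed in this issue, using the `filelock` package; the wrapper function, lock path, and naming are hypothetical and not the library's convention:

```python
import os

from filelock import FileLock

import nlp


def load_dataset_locked(path, cache_dir, **kwargs):
    """Serialize dataset preparation across local processes with a file lock."""
    os.makedirs(cache_dir, exist_ok=True)
    lock_path = os.path.join(cache_dir, "load_dataset.lock")  # hypothetical name
    # Only one process runs download_and_prepare at a time; the others block
    # here and then reuse the cached Arrow files once the lock is released.
    with FileLock(lock_path):
        return nlp.load_dataset(path, cache_dir=cache_dir, **kwargs)
```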
https://api.github.com/repos/huggingface/datasets/issues/542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/542/comments | https://api.github.com/repos/huggingface/datasets/issues/542/events | https://github.com/huggingface/datasets/pull/542 | 688,555,036 | MDExOlB1bGxSZXF1ZXN0NDc1NzkyNTY0 | 542 | Add TensorFlow example | {'avatar_url': 'https://avatars.githubusercontent.com/u/959590?v=4', 'events_url': 'https://api.github.com/users/jplu/events{/privacy}', 'followers_url': 'https://api.github.com/users/jplu/followers', 'following_url': 'https://api.github.com/users/jplu/following{/other_user}', 'gists_url': 'https://api.github.com/users/jplu/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jplu', 'id': 959590, 'login': 'jplu', 'node_id': 'MDQ6VXNlcjk1OTU5MA==', 'organizations_url': 'https://api.github.com/users/jplu/orgs', 'received_events_url': 'https://api.github.com/users/jplu/received_events', 'repos_url': 'https://api.github.com/users/jplu/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jplu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jplu/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jplu'} | [] | closed | false | null | [] | null | [] | 2020-08-29 15:39:27+00:00 | 2020-08-31 09:49:20+00:00 | 2020-08-31 09:49:19+00:00 | CONTRIBUTOR | null | Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/542/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/542/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/542.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/542', 'merged_at': '2020-08-31T09:49:19Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/542.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/542'} | true |
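The PR above refers to a TensorFlow classification example added to the Quick Tour. The snippet below is only a rough sketch of that kind of example, not the code from the PR; the dataset and model names, the padding length, and the batching details are assumptions:

```python
import tensorflow as tf

import nlp
from transformers import BertTokenizerFast

dataset = nlp.load_dataset("glue", "mrpc", split="train")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")


def encode(batch):
    # Pad to a fixed length so the columns convert to uniform tensors.
    return tokenizer(
        batch["sentence1"], batch["sentence2"],
        truncation=True, padding="max_length", max_length=128,
    )


dataset = dataset.map(encode, batched=True)

features = {name: dataset[name] for name in ["input_ids", "attention_mask"]}
tf_dataset = tf.data.Dataset.from_tensor_slices((features, dataset["label"])).batch(8)
```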
https://api.github.com/repos/huggingface/datasets/issues/541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/541/comments | https://api.github.com/repos/huggingface/datasets/issues/541/events | https://github.com/huggingface/datasets/issues/541 | 688,521,224 | MDU6SXNzdWU2ODg1MjEyMjQ= | 541 | Best practices for training tokenizers with nlp | {'avatar_url': 'https://avatars.githubusercontent.com/u/11806234?v=4', 'events_url': 'https://api.github.com/users/moskomule/events{/privacy}', 'followers_url': 'https://api.github.com/users/moskomule/followers', 'following_url': 'https://api.github.com/users/moskomule/following{/other_user}', 'gists_url': 'https://api.github.com/users/moskomule/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/moskomule', 'id': 11806234, 'login': 'moskomule', 'node_id': 'MDQ6VXNlcjExODA2MjM0', 'organizations_url': 'https://api.github.com/users/moskomule/orgs', 'received_events_url': 'https://api.github.com/users/moskomule/received_events', 'repos_url': 'https://api.github.com/users/moskomule/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/moskomule/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/moskomule/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/moskomule'} | [] | closed | false | null | [] | null | ['Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library'] | 2020-08-29 12:06:49+00:00 | 2022-10-04 17:28:04+00:00 | 2022-10-04 17:28:04+00:00 | NONE | null | Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the documentation and examples, I could only find pre-trained tokenizers being used. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/541/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/541/timeline | null | completed | null | null | false |
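As a sketch of what such a workflow can look like, following the "training from memory" approach that the comment above links to; the file path, vocabulary size, and special tokens are placeholders:

```python
import nlp
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

dataset = nlp.load_dataset("text", data_files="xxx/train.txt", split="train")


def batch_iterator(batch_size=1000):
    # Stream batches of raw text out of the memory-mapped dataset.
    for start in range(0, len(dataset), batch_size):
        yield dataset[start : start + batch_size]["text"]


tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(vocab_size=30_000, special_tokens=["<unk>", "<pad>"])
tokenizer.train_from_iterator(batch_iterator(), trainer=trainer, length=len(dataset))
tokenizer.save("xxx/tokenizer.json")
```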
https://api.github.com/repos/huggingface/datasets/issues/540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/540/comments | https://api.github.com/repos/huggingface/datasets/issues/540/events | https://github.com/huggingface/datasets/pull/540 | 688,475,884 | MDExOlB1bGxSZXF1ZXN0NDc1NzMzNzMz | 540 | [BUGFIX] Fix Race Dataset Checksum bug | {'avatar_url': 'https://avatars.githubusercontent.com/u/6608232?v=4', 'events_url': 'https://api.github.com/users/abarbosa94/events{/privacy}', 'followers_url': 'https://api.github.com/users/abarbosa94/followers', 'following_url': 'https://api.github.com/users/abarbosa94/following{/other_user}', 'gists_url': 'https://api.github.com/users/abarbosa94/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/abarbosa94', 'id': 6608232, 'login': 'abarbosa94', 'node_id': 'MDQ6VXNlcjY2MDgyMzI=', 'organizations_url': 'https://api.github.com/users/abarbosa94/orgs', 'received_events_url': 'https://api.github.com/users/abarbosa94/received_events', 'repos_url': 'https://api.github.com/users/abarbosa94/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/abarbosa94/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/abarbosa94/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/abarbosa94'} | [] | closed | false | null | [] | null | ["I'm not sure this would fix #537 .\r\nHowever your point about the missing `middle` data is right and we probably want to include these data as well.\r\nDo you think it would we worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`) ?"
'This has fixed #537 at least on my machine hahaha.\r\n\r\nNice point! I think it would totally worth it :) What the best implementation approach would you suggest?\r\n\r\nWould it be possible to have `high school`, `middle` and `all` inside each portion of `train`, `validation` and `test`? Would this make sense?'
'I think we could have one dataset configuration for `high school`, one for `middle` and one for `all`.\r\nYou just need to add\r\n```python\r\n BUILDER_CONFIGS = [\r\n nlp.BuilderConfig(\r\n name="high school",\r\n description="insert description here",\r\n ),\r\n nlp.BuilderConfig(\r\n name="middle",\r\n description="insert description here",\r\n ),\r\n nlp.BuilderConfig(\r\n name="all",\r\n description="insert description here",\r\n ),\r\n ]\r\n```\r\nas a class attribute for the `Race` class.\r\n\r\nThen in `generate_examples` you can check the value of `self.config.name` and choose which files to include when generating examples.\r\n\r\nYou can check [mlsum](https://github.com/huggingface/nlp/blob/master/datasets/mlsum/mlsum.py) for example if you want to see how it done in general, it\'s a dataset that has five configurations, and each config has train/val/test splits.'
'Hi @lhoestq sorry for the delay in addressing your comments. Thanks for your assistance :)\r\n\r\nYou were correct as well, as I was using the script without the `datasets/race/dataset_infos.json` file, it did not verify the checksum. I already fix it as well :)\r\n\r\nI managed to get everything running smoothly by now. Please let me know if you think that I could improve my solution'] | 2020-08-29 07:00:10+00:00 | 2020-09-18 11:42:20+00:00 | 2020-09-18 11:42:20+00:00 | CONTRIBUTOR | null | In #537 I noticed that there was a bug in checksum checking when I have tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/540/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/540/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/540.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/540', 'merged_at': '2020-09-18T11:42:20Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/540.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/540'} | true |
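To make the configuration discussion above concrete, here is a small standalone sketch of the file-filtering logic a config-aware script could use; the config names follow the suggestion in this thread and the paths follow the RACE archive layout, neither of which is necessarily what the merged script uses:

```python
def keep_member(archive_path: str, config_name: str) -> bool:
    """Decide whether an extracted RACE file belongs to the requested config."""
    is_high = "/high/" in archive_path
    is_middle = "/middle/" in archive_path
    if config_name == "high school":
        return is_high
    if config_name == "middle":
        return is_middle
    return is_high or is_middle  # "all" keeps both portions


print(keep_member("RACE/train/high/1001.txt", "middle"))  # False
print(keep_member("RACE/train/middle/20.txt", "all"))     # True
```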
https://api.github.com/repos/huggingface/datasets/issues/539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/539/comments | https://api.github.com/repos/huggingface/datasets/issues/539/events | https://github.com/huggingface/datasets/issues/539 | 688,323,602 | MDU6SXNzdWU2ODgzMjM2MDI= | 539 | [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data | {'avatar_url': 'https://avatars.githubusercontent.com/u/5833357?v=4', 'events_url': 'https://api.github.com/users/gaguilar/events{/privacy}', 'followers_url': 'https://api.github.com/users/gaguilar/followers', 'following_url': 'https://api.github.com/users/gaguilar/following{/other_user}', 'gists_url': 'https://api.github.com/users/gaguilar/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/gaguilar', 'id': 5833357, 'login': 'gaguilar', 'node_id': 'MDQ6VXNlcjU4MzMzNTc=', 'organizations_url': 'https://api.github.com/users/gaguilar/orgs', 'received_events_url': 'https://api.github.com/users/gaguilar/received_events', 'repos_url': 'https://api.github.com/users/gaguilar/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/gaguilar/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/gaguilar/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/gaguilar'} | [] | closed | false | null | [] | null | ["Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) and running the following command from the root of the repo:\r\n```bash\r\npython nlp-cli test ./datasets/lince --save_infos --all_configs\r\n```\r\nAnd then you can open a pull-request with the updated json file.\r\n\r\nOtherwise we'll do it sometime this week."
'Hi @thomwolf \r\n\r\nThanks for the details! I just created a PR with the updated `dataset_infos.json` file (#550).'
'Thanks for updating the json file. Closing this one'] | 2020-08-28 19:55:51+00:00 | 2020-09-03 16:34:02+00:00 | 2020-09-03 16:34:01+00:00 | CONTRIBUTOR | null | Hi,
There is a `NonMatchingChecksumError` for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark, due to a minor update to that dataset.
How can I update the checksum in the library to solve this issue? The error is shown below, and it also appears in the [nlp viewer](https://huggingface.co/nlp/viewer/?dataset=lince&config=lid_msaea):
```python
import nlp
nlp.load_dataset('lince', 'lid_msaea')
```
Output:
```
NonMatchingChecksumError: ['https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/lid_msaea.zip']
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 196, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 150, in get
builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
download_config.force_download = download_mode == FORCE_REDOWNLOAD
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 469, in _download_and_prepare
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 36, in verify_checksums
raise NonMatchingChecksumError(str(bad_urls))
```
Thank you in advance!
@lhoestq | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/539/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/539/timeline | null | completed | null | null | false |
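The proper fix is to regenerate `dataset_infos.json` as described in the comments above, but as a stop-gap the verification can be skipped: the `load_dataset` signature quoted in the RACE traceback further down in this list includes an `ignore_verifications` flag for exactly this situation.

```python
import nlp

# Temporary workaround until the checksum metadata is regenerated:
# skip the checksum verification for this dataset.
dataset = nlp.load_dataset("lince", "lid_msaea", ignore_verifications=True)
```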
https://api.github.com/repos/huggingface/datasets/issues/538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/538/comments | https://api.github.com/repos/huggingface/datasets/issues/538/events | https://github.com/huggingface/datasets/pull/538 | 688,015,912 | MDExOlB1bGxSZXF1ZXN0NDc1MzU3MjY2 | 538 | [logging] Add centralized logging - Bump-up cache loads to warnings | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | [] | 2020-08-28 11:42:29+00:00 | 2020-08-31 11:42:51+00:00 | 2020-08-31 11:42:51+00:00 | MEMBER | null | Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO).
You can use:
```
nlp.logging.set_verbosity(verbosity: int)
nlp.logging.set_verbosity_info()
nlp.logging.set_verbosity_warning()
nlp.logging.set_verbosity_debug()
nlp.logging.set_verbosity_error()
nlp.logging.get_verbosity() -> int
```
And use the levels:
```
nlp.logging.CRITICAL
nlp.logging.DEBUG
nlp.logging.ERROR
nlp.logging.FATAL
nlp.logging.INFO
nlp.logging.NOTSET
nlp.logging.WARN
nlp.logging.WARNING
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/538/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/538/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/538.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/538', 'merged_at': '2020-08-31T11:42:50Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/538.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/538'} | true |
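A short usage example of the module described in this PR (the dataset name is only an illustration):

```python
import nlp

# Silence everything below WARNING; since WARNING is above INFO, the tqdm
# bars tied to INFO-level verbosity are also disabled.
nlp.logging.set_verbosity_warning()
assert nlp.logging.get_verbosity() == nlp.logging.WARNING

dataset = nlp.load_dataset("squad", split="train")  # no progress bars or info logs

# Switch back to verbose output while debugging a dataset script.
nlp.logging.set_verbosity_debug()
```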
https://api.github.com/repos/huggingface/datasets/issues/537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/537/comments | https://api.github.com/repos/huggingface/datasets/issues/537/events | https://github.com/huggingface/datasets/issues/537 | 687,614,699 | MDU6SXNzdWU2ODc2MTQ2OTk= | 537 | [Dataset] RACE dataset Checksums error | {'avatar_url': 'https://avatars.githubusercontent.com/u/6608232?v=4', 'events_url': 'https://api.github.com/users/abarbosa94/events{/privacy}', 'followers_url': 'https://api.github.com/users/abarbosa94/followers', 'following_url': 'https://api.github.com/users/abarbosa94/following{/other_user}', 'gists_url': 'https://api.github.com/users/abarbosa94/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/abarbosa94', 'id': 6608232, 'login': 'abarbosa94', 'node_id': 'MDQ6VXNlcjY2MDgyMzI=', 'organizations_url': 'https://api.github.com/users/abarbosa94/orgs', 'received_events_url': 'https://api.github.com/users/abarbosa94/received_events', 'repos_url': 'https://api.github.com/users/abarbosa94/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/abarbosa94/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/abarbosa94/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/abarbosa94'} | [{'color': '2edb81', 'default': False, 'description': 'A bug in a dataset script provided in the library', 'id': 2067388877, 'name': 'dataset bug', 'node_id': 'MDU6TGFiZWwyMDY3Mzg4ODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug'}] | closed | false | null | [] | null | ['`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.\r\nEither the file you downloaded was corrupted along the way, or the host updated the file.\r\nCould you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an update in the data, and we may have to update the expected checksum value.'
'I just cleared the cache an run it again. The error persists ):\r\n\r\n```\r\n nlp (master) $ rm -rf /Users/abarbosa/.cache/huggingface/\r\n nlp (master) $ python\r\nPython 3.8.5 (default, Aug 5 2020, 03:39:04)\r\n[Clang 10.0.0 ] :: Anaconda, Inc. on darwin\r\nType "help", "copyright", "credits" or "license" for more information.\r\n>>> import nlp\r\n>>> dataset = nlp.load_dataset("race")\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.39k/4.39k [00:00<00:00, 661kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.81k/1.81k [00:00<00:00, 644kB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset race/default (download: 84.52 MiB, generated: 132.61 MiB, post-processed: Unknown size, total: 217.13 MiB) to /Users/abarbosa/.cache/huggingface/datasets/race/default/0.1.0/5461327f1a83549ca0d845a3159c806d2baf4f8d0d8f7d657157ce7cdf3899c2...\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25.4M/25.4M [01:03<00:00, 401kB/s]\r\nTraceback (most recent call last):\r\n File "<stdin>", line 1, in <module>\r\n File "/Users/abarbosa/Documents/nlp/src/nlp/load.py", line 550, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File "/Users/abarbosa/Documents/nlp/src/nlp/builder.py", line 471, in download_and_prepare\r\n self._download_and_prepare(\r\n File "/Users/abarbosa/Documents/nlp/src/nlp/builder.py", line 530, in _download_and_prepare\r\n verify_checksums(\r\n File "/Users/abarbosa/Documents/nlp/src/nlp/utils/info_utils.py", line 38, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\nnlp.utils.info_utils.NonMatchingChecksumError: Checksums didn\'t match for dataset source files:\r\n[\'http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz\']\r\n>>>\r\n```'
'Dealing with the same issue please update the checksum on nlp library end. The data seems to have changed on their end.'
'We have a discussion on this datasets here: https://github.com/huggingface/nlp/pull/540\r\n\r\nFeel free to participate if you have some opinion on the scope of data which should be included in this dataset.'
"At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n\r\n"
"> At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n\r\nCould you upload this please?"
'> > At least for me, the file that was downloaded from CMU isn\'t the complete dataset, but a small subset of it (~25MB vs ~85MB). I\'ve previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n> \r\n> Could you upload this please?\r\n\r\nNot sure if I can upload it according to their license ("You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.").'
'I managed to fix it in #540 :)'
'Closing since @540 is merged\r\n\r\nThanks again @abarbosa94 '] | 2020-08-27 23:58:16+00:00 | 2020-09-18 12:07:04+00:00 | 2020-09-18 12:07:04+00:00 | CONTRIBUTOR | null | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-15-8bf7603ce0ed> in <module>
----> 1 dataset = nlp.load_dataset("race")
2 len(dataset["train"]), len(dataset["validation"])
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
546
547 # Download and prepare data
--> 548 builder_instance.download_and_prepare(
549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
550 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
460 logger.info("Dataset not on Hf google storage. Downloading and preparing it from source")
461 if not downloaded_from_gcs:
--> 462 self._download_and_prepare(
463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
464 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
519 # Checksums verification
520 if verify_infos:
--> 521 verify_checksums(
522 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
523 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz']
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/537/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/537/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/536/comments | https://api.github.com/repos/huggingface/datasets/issues/536/events | https://github.com/huggingface/datasets/pull/536 | 687,378,332 | MDExOlB1bGxSZXF1ZXN0NDc0ODE0NzY1 | 536 | Fingerprint | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['I changed the way I implemented fingerprint updates to use decorator functions.\r\n\r\nI also added a new attribute called `_inplace_history` that stores the in-place history of transforms (like cast_, rename_columns, etc.). This history is useful to replay the changes that were done in-place when unpickling a dataset that is memory mapped from a file.\r\n\r\nLet me know what you think @thomwolf '] | 2020-08-27 16:27:09+00:00 | 2020-08-31 14:20:40+00:00 | 2020-08-31 14:20:39+00:00 | MEMBER | null | This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc.
However, the caching didn't handle these changes: it only took into account the previous cache file name of the table, not the possible in-place transforms applied to it.
To fix that, I added the concept of a dataset fingerprint, which is updated after each transform (in-place or not) and stored inside the table metadata.
When a dataset is created, an initial fingerprint is computed. If the dataset is memory-mapped, then the fingerprint generator doesn't read the table and only looks at the filename. However if the table is in-memory, then the fingerprint generator reads the content of the table using a batched non-crypto hashing.
I added a utility class to compute hashes of arbitrary python objects in `fingerprint.py` : `Hasher`. The API is close to standard hashing tools (`.update`, `.hexdigest`). It also supports custom hashing functions depending on object types using a registry like pickle. I added a custom hashing function to hash a `pa.Table` in a batched way, and also for `nlp.DatasetInfo` to leverage its json serialization feature.
Note about this PR:
This is a draft PR because #513 needs to be merged first.
The diff that is shown is for branches fingerprint -> indices (and not master, for now) | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/536/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/536/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/536.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/536', 'merged_at': '2020-08-31T14:20:39Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/536.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/536'} | true |
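To illustrate the `Hasher` API described above (`.update`/`.hexdigest` plus a per-type registry of custom hashing functions), here is a simplified standalone sketch; it uses `hashlib.md5` and `pickle` purely for illustration, whereas the PR describes batched non-crypto hashing and custom functions for `pa.Table` and `nlp.DatasetInfo`:

```python
import hashlib
import pickle


class Hasher:
    """Toy version of a fingerprint hasher with a type-based dispatch registry."""

    dispatch = {}  # maps a type to a function that returns bytes for that type

    def __init__(self):
        self._hasher = hashlib.md5()

    @classmethod
    def register(cls, type_, func):
        cls.dispatch[type_] = func

    def update(self, value):
        to_bytes = self.dispatch.get(type(value), pickle.dumps)
        self._hasher.update(to_bytes(value))

    def hexdigest(self) -> str:
        return self._hasher.hexdigest()


# Example custom hashing function: hash lists via their repr instead of pickling.
Hasher.register(list, lambda x: repr(x).encode("utf-8"))

h = Hasher()
h.update({"transform": "map", "batched": True})
h.update([1, 2, 3])
print(h.hexdigest())
```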
https://api.github.com/repos/huggingface/datasets/issues/535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/535/comments | https://api.github.com/repos/huggingface/datasets/issues/535/events | https://github.com/huggingface/datasets/pull/535 | 686,238,315 | MDExOlB1bGxSZXF1ZXN0NDczODM3Njg0 | 535 | Benchmarks | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | [] | 2020-08-26 11:21:26+00:00 | 2020-08-27 08:40:00+00:00 | 2020-08-27 08:39:59+00:00 | MEMBER | null | Adding some benchmarks with DVC/CML
To add a new tracked benchmark:
- create a new Python benchmarking script in `./benchmarks/`. The script can use the utilities in `./benchmarks/utils.py` and should output a JSON file with results to `./benchmarks/results/`.
- add a new pipeline stage in [dvc.yaml](./dvc.yaml) with the name of your new benchmark.
That's it | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/535/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/535/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/535.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/535', 'merged_at': '2020-08-27T08:39:59Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/535.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/535'} | true |
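As an illustration of the layout described above, a hypothetical benchmark script could look like the sketch below; the file name, the benchmarked operation, and the use of GLUE/MRPC are placeholders, and the real scripts rely on `./benchmarks/utils.py` instead:

```python
import json
import os
import time

import nlp

RESULTS_DIR = "./benchmarks/results"


def benchmark_map_speed():
    """Time a simple non-batched map over a small slice of GLUE/MRPC."""
    dataset = nlp.load_dataset("glue", "mrpc", split="train[:1000]")
    start = time.time()
    dataset.map(lambda example: {"n_chars": len(example["sentence1"])})
    return time.time() - start


if __name__ == "__main__":
    os.makedirs(RESULTS_DIR, exist_ok=True)
    results = {"map_mrpc_1000_rows_seconds": benchmark_map_speed()}
    # Write the JSON report where the DVC pipeline stage expects it.
    with open(os.path.join(RESULTS_DIR, "benchmark_map.json"), "w") as f:
        json.dump(results, f, indent=2)
```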
https://api.github.com/repos/huggingface/datasets/issues/534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/534/comments | https://api.github.com/repos/huggingface/datasets/issues/534/events | https://github.com/huggingface/datasets/issues/534 | 686,115,912 | MDU6SXNzdWU2ODYxMTU5MTI= | 534 | `list_datasets()` is broken. | {'avatar_url': 'https://avatars.githubusercontent.com/u/314169?v=4', 'events_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/events{/privacy}', 'followers_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/followers', 'following_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/following{/other_user}', 'gists_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/ashutosh-dwivedi-e3502', 'id': 314169, 'login': 'ashutosh-dwivedi-e3502', 'node_id': 'MDQ6VXNlcjMxNDE2OQ==', 'organizations_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/orgs', 'received_events_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/received_events', 'repos_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ashutosh-dwivedi-e3502/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/ashutosh-dwivedi-e3502'} | [] | closed | false | null | [] | null | ['Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release'
'What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```'
'Thanks @lhoestq . '] | 2020-08-26 08:19:01+00:00 | 2020-08-27 06:31:11+00:00 | 2020-08-27 06:31:11+00:00 | NONE | null | version = '0.4.0'
`list_datasets()` is broken. It results in the following error:
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
375 if cls in self.type_pprinters:
376 # printer registered in self.type_pprinters
--> 377 return self.type_pprinters[cls](obj, self, cycle)
378 else:
379 # deferred printer
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in inner(obj, p, cycle)
553 p.text(',')
554 p.breakable()
--> 555 p.pretty(x)
556 if len(obj) == 1 and type(obj) is tuple:
557 # Special case for 1-item tuples.
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
392 if cls is not object \
393 and callable(cls.__dict__.get('__repr__')):
--> 394 return _repr_pprint(obj, self, cycle)
395
396 return _default_pprint(obj, self, cycle)
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
698 """A pprint that just redirects to the normal repr function."""
699 # Find newlines and replace them with p.break_()
--> 700 output = repr(obj)
701 lines = output.splitlines()
702 with p.group():
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/nlp/hf_api.py in __repr__(self)
110
111 def __repr__(self):
--> 112 single_line_description = self.description.replace("\n", "")
113 return f"nlp.ObjectInfo(id='{self.id}', description='{single_line_description}', files={self.siblings})"
114
AttributeError: 'NoneType' object has no attribute 'replace'
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/534/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/534/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/533/comments | https://api.github.com/repos/huggingface/datasets/issues/533/events | https://github.com/huggingface/datasets/pull/533 | 685,585,914 | MDExOlB1bGxSZXF1ZXN0NDczMjg4OTgx | 533 | Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list arrays | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-25 15:32:44+00:00 | 2020-08-26 08:02:24+00:00 | 2020-08-26 08:02:23+00:00 | MEMBER | null | It should fix the CI problems in #513 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/533/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/533/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/533.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/533', 'merged_at': '2020-08-26T08:02:23Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/533.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/533'} | true |
https://api.github.com/repos/huggingface/datasets/issues/532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/532/comments | https://api.github.com/repos/huggingface/datasets/issues/532/events | https://github.com/huggingface/datasets/issues/532 | 685,540,614 | MDU6SXNzdWU2ODU1NDA2MTQ= | 532 | File exists error when used with TPU | {'avatar_url': 'https://avatars.githubusercontent.com/u/20531705?v=4', 'events_url': 'https://api.github.com/users/go-inoue/events{/privacy}', 'followers_url': 'https://api.github.com/users/go-inoue/followers', 'following_url': 'https://api.github.com/users/go-inoue/following{/other_user}', 'gists_url': 'https://api.github.com/users/go-inoue/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/go-inoue', 'id': 20531705, 'login': 'go-inoue', 'node_id': 'MDQ6VXNlcjIwNTMxNzA1', 'organizations_url': 'https://api.github.com/users/go-inoue/orgs', 'received_events_url': 'https://api.github.com/users/go-inoue/received_events', 'repos_url': 'https://api.github.com/users/go-inoue/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/go-inoue/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/go-inoue/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/go-inoue'} | [] | open | false | null | [] | null | ['I am facing probably facing similar issues with \r\n\r\n`wiki40b_en_100_0`'
'Could you try to run `dataset = load_dataset("text", data_files=file_path, split="train")` once before calling the script ?\r\n\r\nIt looks like several processes try to create the dataset in arrow format at the same time. If the dataset is already created it should be fine'
'Thanks! I tested on 328MB text data on `n1-standard-8 (8 vCPUs, 30 GB memory)`. The main script ran without any issue, but it seems to require a huge space in the drive.\r\n\r\nAs suggested, I ran the following script before running the pre-training command with `xla_spawn.py`.\r\n\r\n```python\r\nfrom nlp import load_dataset\r\n\r\nfile_path="your_file_name"\r\nload_dataset("text", data_files=file_path, split="train")\r\n```\r\nThis will create `text-train.arrow` under the default cache directory. Then, I run the script with `xla_spawn.py`. It will load data from the cached file. My understanding is that there\'s no other way but to do this two-step process with the current version (0.4) of `nlp`.\r\n\r\nDuring another caching process that happens in the main script:\r\n\r\n```\r\n08/26/2020 09:19:51 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 09:19:53 - INFO - nlp.arrow_dataset - Caching processed dataset at /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-f90f341e5308a7469\r\n8d872bcc88f9c0e.arrow\r\n```\r\n\r\n`nlp` generates a temporary file per core, each of which is three times larger than the original text data. If each process is actually writing on the disk, you will need a huge amount of space in your drive. (Maybe I\'m missing something.)\r\n\r\n```\r\n-rw-r--r-- 1 ***** ***** 674 Aug 26 09:19 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 09:19 LICENSE\r\n-rw-r--r-- 1 ***** ***** 332M Aug 26 09:10 text-train.arrow\r\n-rw------- 1 ***** ***** 940M Aug 26 09:31 tmp0k43sazw\r\n-rw------- 1 ***** ***** 940M Aug 26 09:31 tmp7sxs9mj5\r\n-rw------- 1 ***** ***** 939M Aug 26 09:31 tmpbbiqw2vp\r\n-rw------- 1 ***** ***** 937M Aug 26 09:31 tmpjxb5ptyu\r\n-rw------- 1 ***** ***** 933M Aug 26 09:31 tmpk3hkdh0e\r\n-rw------- 1 ***** ***** 944M Aug 26 09:31 tmpnoalwftz\r\n-rw------- 1 ***** ***** 931M Aug 26 09:31 tmpuxdr_dz3\r\n-rw------- 1 ***** ***** 945M Aug 26 09:31 tmpxjyuy6dk\r\n```\r\nAfter the caching process, they seem to be merged into one file.\r\n\r\n```\r\n-rw------- 1 ***** ***** 989M Aug 26 09:32 cache-f90f341e5308a74698d872bcc88f9c0e.arrow\r\n-rw-r--r-- 1 ***** ***** 674 Aug 26 09:19 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 09:19 LICENSE\r\n-rw-r--r-- 1 ***** ***** 332M Aug 26 09:10 text-train.arrow\r\n```'
"Again it looks like every process tries to tokenize the full dataset at the same time.\r\nIf you do the tokenization before calling `xla_spawn.py` once, then each process will then use the tokenized cached file `cache-f90f341e5308a74698d872bcc88f9c0e.arrow` and not recompute it.\r\n\r\nNot sure if there's a better way to do that cc @julien-c @thomwolf "
"I wrote a separate script just for preparing a cached file, including tokenization. Each process did use the tokenized cached file.\r\n\r\nCurrently I'm testing the pipeline on 24GB text data. It took about 1.5 hour to create a cached file on `n1-highmem-16 (16 vCPUs, 104 GB memory)`. I assume loading this cached file in the main script with `xla_spawn.py` won't be an issue (even if there are 8 processes).\r\n\r\n```\r\ntotal 98G\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 13:38 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 12:24 ..\r\n-rw------- 1 ***** ***** 74G Aug 26 13:38 cache-a7aa04134ba7b1aff5d9710f14a4e334.arrow\r\n-rw-r--r-- 1 ***** ***** 681 Aug 26 12:24 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 12:24 LICENSE\r\n-rw-r--r-- 1 ***** ***** 25G Aug 26 12:24 text-train.arrow\r\n```"
'Yes loading the cached file should be fine from different processes'
"Sorry, I thought it was working, but actually the second call doesn't use the cached file that was generated separately, and it will generate another cache-****.arrorw file with a different name. If I run the training script again (with `xla_spawn.py`), it will use the second cached file, which was generated by the training script itself in the previous run.\r\n\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 15:35 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:29 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 15:35 cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n-rw------- 1 ***** ***** 99M Aug 26 15:29 cache-69633651476e943b93c89ace715f9487.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 15:33 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 15:33 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:29 text-train.arrow\r\n```"
'So if I understand correctly it means that the cached file generated by your separated script is different by the one used by the training script ?'
'Yes.\r\n\r\n1. `cache-69633651476e943b93c89ace715f9487.arrow` was generated with a separate script. \r\n2. I ran the entire script with `xla_spawn.py`.\r\n3. `cache-69633651476e943b93c89ace715f9487.arrow` is not used.\r\n4. `cache-0d77dfce704493dbe63f071eed6a5431.arrow` is created.\r\n5. training starts...\r\n\r\nNow, if I kill the process at step 5, and do the step 2 again, it will use `cache-0d77dfce704493dbe63f071eed6a5431.arrow` (cached file created at step 4) without any issue.\r\n\r\nI used the following to generate the first cached file.\r\n```python\r\ndataset = load_dataset("text", data_files=file_path, split="train")\r\ndataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\ndataset.set_format(type=\'torch\', columns=[\'input_ids\'])\r\n```'
"1. Here's the log from the first step.\r\n```\r\nDownloading and preparing dataset text/default-e84dd29acc4ad9ef (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDataset text downloaded and prepared to /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d. Subsequent calls will reuse this data.\r\n```\r\nThere's a file named `cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow`, so it did create a cached file.\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 15:59 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:58 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 15:59 cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 15:58 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 15:58 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:58 text-train.arrow\r\n```\r\n2. Ideally, `cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow` should be used in `run_language_modeling.py` (modified version using `nlp`) with `xla_spawn.py`. But it looks like it's creating a new cached file.\r\n\r\n```\r\n08/26/2020 16:13:03 - INFO - filelock - Lock 139635836351096 released on /home/*****/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.202fa4f84f552bff1f5400ae012663839c61efb3de068c6c8722d34ac0ea6192\r\n.py.lock\r\n08/26/2020 16:13:03 - WARNING - nlp.builder - Using custom data configuration default\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:13:03 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:13:03 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:13:05 - INFO - nlp.arrow_dataset - Caching processed dataset at 
/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-0d77dfce704493dbe\r\n63f071eed6a5431.arrow\r\n^M 0%| | 0/100 [00:00<?, ?it/s]08/26/2020 16:13:05 - INFO - nlp.arrow_dataset - Caching processed dataset at /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6\r\nfe661fe4d070d380d/cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n```\r\n\r\nThere are two cached files in the directory:\r\n\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 16:14 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:58 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 16:14 cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n-rw------- 1 ***** ***** 99M Aug 26 15:59 cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 16:13 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 16:13 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:58 text-train.arrow\r\n```\r\n\r\nIf I kill the process, and run it again, it will use the second cached file.\r\n\r\n```\r\n08/26/2020 16:19:52 - WARNING - nlp.builder - Using custom data configuration default\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:19:52 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:19:52 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:19:53 - INFO - nlp.arrow_dataset - Loading cached processed dataset at /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-0d77dfce70\r\n4493dbe63f071eed6a5431.arrow\r\n08/26/2020 16:19:53 - INFO - nlp.arrow_dataset - Set __getitem__(key) output type to torch for ['input_ids'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\n```"
'Thanks for all the details.\r\nThe two cached files are supposed to be the same. I suspect that the caching has a problem with the tokenizer.\r\nWhich tokenizer did you use ?'
'I trained a byte-level BPE tokenizer on my data with `tokenziers` library following this [example](https://github.com/huggingface/tokenizers/blob/master/bindings/python/examples/train_bytelevel_bpe.py).\r\n\r\nAnd I put these model files in a directory named `"model_name"`. I also put config.json, which is the original RoBERTa config file.\r\n\r\n```bash\r\n%ls model_name\r\nconfig.json merges.txt vocab.json\r\n```\r\n\r\n[This](https://github.com/huggingface/transformers/blob/4bd7be9a4268221d2a0000c7e8033aaeb365c03b/examples/language-modeling/run_language_modeling.py#L196) is the line where `run_language_modeling.py` loads the tokenier.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n```\r\n\r\nI use `"model_name"` for `model_args.tokenizer_name`. I don\'t specify `model_args.cache_dir`. It is \'None\' by default.'
"In my separated script for caching, I'm using `use_fast=True` when initializing a tokenizer.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(args.config_name, use_fast=True)\r\n```\r\nI wasn't using that option in the main script. That could be the reason..."
'Yea it could definitely explain why you have two different cache files.\r\nLet me know if using the same tokenizers on both sides fixes the issue'
'It still creates a new file even if I remove `use_fast=True`... \r\n\r\nHere\'s the script used to create a cached file.\r\n```python \r\n#!/usr/bin/env python3\r\n\r\nimport argparse\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\nfrom nlp import load_dataset\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description=\'description\')\r\n parser.add_argument(\'--config_name\', type=str, help=\'Pretrained config name or path if not the same as model_name\')\r\n parser.add_argument(\'--data_file\', type=str, help=\'The input data file (a text file).\')\r\n parser.add_argument(\'--block_size\', type=int, default=-1, help=\'The training dataset will be truncated in block of this size for training\')\r\n args = parser.parse_args()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(args.config_name)\r\n\r\n dataset = load_dataset("text", data_files=args.data_file, split="train")\r\n dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type=\'torch\', columns=[\'input_ids\'])\r\n\r\n\r\nif __name__ == "__main__":\r\n main()\r\n```\r\n\r\nHere\'s how the data is loaded in the modified `run_language_modeling.py`. [[original function](https://github.com/huggingface/transformers/blob/971d1802d009d9996b36a34a34477cee849ef39f/examples/language-modeling/run_language_modeling.py#L128-L135)]\r\n\r\n```python\r\ndef get_dataset(args: DataTrainingArguments, tokenizer: PreTrainedTokenizer, evaluate=False):\r\n file_path = args.eval_data_file if evaluate else args.train_data_file\r\n split = "validation" if evaluate else "train"\r\n if args.line_by_line:\r\n # return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\n dataset = load_dataset("text", data_files=file_path, split="train")\r\n dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type=\'torch\', columns=[\'input_ids\'])\r\n return dataset\r\n\r\n else:\r\n return TextDataset(\r\n tokenizer=tokenizer, file_path=file_path, block_size=args.block_size, overwrite_cache=args.overwrite_cache\r\n )\r\n```\r\n\r\nProbably I don\'t need this part in the main script,\r\n\r\n```python\r\ndataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type=\'torch\', columns=[\'input_ids\'])\r\n```\r\nand simply do this?\r\n```python\r\ndataset = load_dataset("text", data_files=file_path, split="train")\r\nreturn dataset\r\n```'
'You need this part in the main script or it will use the dataset that is not tokenized\r\n\r\n'
'I can see that the tokenizer in `run_language_modeling.py` is not instantiated the same way as in your separated script.\r\nIndeed we can see L196:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n```\r\nCould you try to make it so they are instantiated the exact same way please ?'
'I updated my separated script, but it\'s creating a cached file again. If I don\'t use the `model_args.cache_dir`, both will get `None`, so they should be the same.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport argparse\r\n\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description=\'description\')\r\n parser.add_argument(\'--tokenizer_name\', type=str, help=\'Pretrained tokenizer name or path if not the same as model_name\')\r\n parser.add_argument(\'--data_file\', type=str, help=\'The input data file (a text file).\')\r\n parser.add_argument(\'--cache_dir\', type=str, default=None, help=\'Where do you want to store the pretrained models downloaded from s3\')\r\n parser.add_argument(\'--block_size\', type=int, default=-1, help=\'The training dataset will be truncated in block of this size for training\')\r\n\r\n model_args = parser.parse_args()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n\r\n dataset = load_dataset("text", data_files=model_args.data_file, split="train")\r\n dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,\r\n truncation=True, max_length=model_args.block_size), batched=True)\r\n dataset.set_format(type=\'torch\', columns=[\'input_ids\'])\r\n\r\nif __name__ == "__main__":\r\n main()\r\n```\r\n\r\nIs there a way to specify the cache file to load, and skip the re-computation?'
'Could you also check that the `args.block_size` used in the lambda function is the same as well ?'
'Here\'s a minimal working example to reproduce this issue.\r\n\r\nAssumption:\r\n- You have access to TPU.\r\n- You have installed `transformers` and `nlp`.\r\n- You have tokenizer files (`config.json`, `merges.txt`, `vocab.json`) under the directory named `model_name`.\r\n- You have `xla_spawn.py` (Download from https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py).\r\n- You have saved the following script as `prepare_cached_dataset.py`.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport argparse\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description=\'description\')\r\n parser.add_argument(\'--tokenizer_name\', type=str, help=\'Pretrained tokenizer name or path if not the same as model_name\')\r\n parser.add_argument(\'--data_file\', type=str, help=\'The input data file (a text file).\')\r\n parser.add_argument(\'--cache_dir\', type=str, default=None, help=\'Where do you want to store the pretrained models downloaded from s3\')\r\n parser.add_argument(\'--block_size\', type=int, default=-1, help=\'The training dataset will be truncated in block of this size for training\')\r\n parser.add_argument(\'--tpu_num_cores\', type=int, default=1, help=\'Number of TPU cores to use (1 or 8). For xla_apwan.py\')\r\n model_args = parser.parse_args()\r\n \r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=True)\r\n \r\n dataset = load_dataset("text", data_files=model_args.data_file, split="train")\r\n dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,\r\n truncation=True, max_length=model_args.block_size), batched=True)\r\n dataset.set_format(type=\'torch\', columns=[\'input_ids\'])\r\n\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n\r\nif __name__ == "__main__":\r\n main()\r\n```\r\n\r\n- Run the following command. Replace `your_training_data` with some text file.\r\n\r\n```bash\r\nexport TRAIN_DATA=your_training_data\r\n\r\npython prepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n- Check the cached directory.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 132M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:08 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:08 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:08 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n\r\n- Run the same script again. 
(The output should be just `Using custom data configuration default`.)\r\n```\r\npython prepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n- Check the cached directory.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 132M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:08 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:20 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:20 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n- The cached file (`cache-bfc7cb0702426d19242db5e8c079f04b.arrow`) is reused.\r\n- Now, run this script with `xla_spawn.py`. Ideally, it should reuse the cached file, however, you will see each process is creating a cache file again.\r\n\r\n```bash\r\npython xla_spawn.py --num_cores 8 \\\r\nprepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n\r\n- Check the cached directory. There are two arrrow files.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 230M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:25 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw------- 1 ***** ***** 99M Aug 28 13:25 cache-e0e2313e49c8a110aafcc8133154c19a.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:24 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:24 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n'
'I ended up specifying the `cache_file_name` argument when I call `map` function.\r\n\r\n```python\r\ndataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size),\r\n batched=True,\r\n cache_file_name=cache_file_name)\r\n```\r\n\r\nNote:\r\n- `text` dataset in `nlp` does not strip `"\\n"`. If you want the same output as in [`LineByLineTextDataset`](https://github.com/huggingface/transformers/blob/afc4ece462ad83a090af620ff4da099a0272e171/src/transformers/data/datasets/language_modeling.py#L88-L111), you would need to create your own dataset class where you replace `line` to `line.strip()` [here](https://github.com/huggingface/nlp/blob/master/datasets/text/text.py#L35).\r\n'] | 2020-08-25 14:36:38+00:00 | 2020-09-01 12:14:56+00:00 | null | NONE | null | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L131) as follows:
```python
# line 131: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```
When I run this with [`xla_spawn.py`](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py), I get the following error (it produces one message per TPU core, which I believe is fine).
It seems the current version doesn't take into account distributed training processes as in [this example](https://github.com/huggingface/transformers/blob/a573777901e662ec2e565be312ffaeedef6effec/src/transformers/data/datasets/language_modeling.py#L35-L38)?
```
08/25/2020 13:59:41 - WARNING - nlp.builder - Using custom data configuration default
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:6: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:4: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:1: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:7: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:3: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:2: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:0: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
Traceback (most recent call last):
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
```
| {'+1': 1, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 1, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/532/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/532/timeline | null | null | null | null | false |
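The thread above closes with two fixes: instantiating the tokenizer the same way in both scripts, and pinning the cache file explicitly so every process reuses the same Arrow file. A minimal sketch of the second workaround, assuming a local tokenizer directory and text file (`model_name`, `train.txt`, `tokenized_train.arrow` and `max_length=512` are placeholders, not values from the thread):

```python
from transformers import AutoTokenizer
from nlp import load_dataset  # the thread uses nlp 0.4.0

tokenizer = AutoTokenizer.from_pretrained("model_name")  # placeholder tokenizer directory

dataset = load_dataset("text", data_files="train.txt", split="train")  # placeholder data file
# Pinning cache_file_name makes every process read/write the same cache file
# instead of deriving a new fingerprint from its own tokenizer instance.
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=512),
    batched=True,
    cache_file_name="tokenized_train.arrow",
)
dataset.set_format(type="torch", columns=["input_ids"])
```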
https://api.github.com/repos/huggingface/datasets/issues/531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/531/comments | https://api.github.com/repos/huggingface/datasets/issues/531/events | https://github.com/huggingface/datasets/pull/531 | 685,291,036 | MDExOlB1bGxSZXF1ZXN0NDczMDM4ODc4 | 531 | add concatenate_datasets to the docs | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-25 08:40:05+00:00 | 2020-08-25 09:02:20+00:00 | 2020-08-25 09:02:19+00:00 | MEMBER | null | null | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/531/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/531/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/531.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/531', 'merged_at': '2020-08-25T09:02:19Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/531.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/531'} | true |
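PR 531 above only adds `concatenate_datasets` to the documentation; for context, a small usage sketch, assuming the datasets being concatenated share the same features:

```python
import nlp

d1 = nlp.Dataset.from_dict({"text": ["a", "b"]})
d2 = nlp.Dataset.from_dict({"text": ["c"]})

combined = nlp.concatenate_datasets([d1, d2])  # rows of d1 followed by rows of d2
print(len(combined))  # 3
```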
https://api.github.com/repos/huggingface/datasets/issues/530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/530/comments | https://api.github.com/repos/huggingface/datasets/issues/530/events | https://github.com/huggingface/datasets/pull/530 | 684,825,612 | MDExOlB1bGxSZXF1ZXN0NDcyNjQ5NTk2 | 530 | use ragged tensor by default | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['Yes I agree. Maybe something that lets specify different format depending on the column ? Especially to better control dtype and shape (and ragged for tf)\r\n\r\nOh and I forgot: this one should also fix the second issue found in #477 for the next release'
'I am running into the same issue with the error message on my local windows machine -\r\nAttributeError: \'tensorflow.python.framework.ops.EagerTensor\' object has no attribute \'to_tensor\'. Tensorflow version is 2.6. Anything that I can do to fix it?\r\ntrain_features = {x: tf_train_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\ntrain_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset["label"]))\r\ntrain_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n\r\neval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\neval_tf_dataset = tf.data.Dataset.from_tensor_slices((eval_features, tf_eval_dataset["label"]))\r\neval_tf_dataset = eval_tf_dataset.batch(8)\r\n\r\nttributeError Traceback (most recent call last)\r\n<ipython-input-59-f50e45c2c0dc> in <module>\r\n----> 1 train_features = {x: tf_train_dataset[x].convert_to_tensor() for x in tokenizer.model_input_names}\r\n 2 train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset["label"]))\r\n 3 train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n 4 \r\n 5 eval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\n\r\n<ipython-input-59-f50e45c2c0dc> in <dictcomp>(.0)\r\n----> 1 train_features = {x: tf_train_dataset[x].convert_to_tensor() for x in tokenizer.model_input_names}\r\n 2 train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset["label"]))\r\n 3 train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n 4 \r\n 5 eval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\n\r\n~\\AppData\\Roaming\\Python\\Python38\\site-packages\\tensorflow\\python\\framework\\ops.py in __getattr__(self, name)\r\n 399 from tensorflow.python.ops.numpy_ops import np_config\r\n 400 np_config.enable_numpy_behavior()""".format(type(self).__name__, name))\r\n--> 401 self.__getattribute__(name)\r\n 402 \r\n 403 @staticmethod\r\n\r\nAttributeError: \'tensorflow.python.framework.ops.EagerTensor\' object has no attribute \'convert_to_tensor\'\r\n\r\n'
'Hi ! Before calling `to_tensor`, make sure that your object is a RaggedTensor, because it may already be a regular Tensor if the shapes of your examples are all the same'
'Okay. i am not familiar with how to check the difference between the two. I will research on this.'] | 2020-08-24 17:06:15+00:00 | 2021-10-22 19:38:40+00:00 | 2020-08-24 19:22:25+00:00 | MEMBER | null | I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow.
Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which made things difficult to handle, as it could sometimes return a ragged tensor and sometimes not.
Therefore I reverted this behavior to always return a ragged tensor as we used to do. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/530/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/530/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/530.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/530', 'merged_at': '2020-08-24T19:22:25Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/530.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/530'} | true |
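The `AttributeError: 'EagerTensor' object has no attribute 'to_tensor'` reported in the comments appears when `.to_tensor()` is called on a tensor that is already dense. A small defensive sketch, assuming TensorFlow 2.x:

```python
import tensorflow as tf

def densify(x):
    # RaggedTensor -> dense tensor (zero-padded); already-dense tensors pass through unchanged.
    return x.to_tensor() if isinstance(x, tf.RaggedTensor) else x

ragged = tf.ragged.constant([[1, 2, 3], [4]])
dense = tf.constant([[1, 2], [3, 4]])
print(densify(ragged).shape)  # (2, 3)
print(densify(dense).shape)   # (2, 2)
```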
https://api.github.com/repos/huggingface/datasets/issues/529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/529/comments | https://api.github.com/repos/huggingface/datasets/issues/529/events | https://github.com/huggingface/datasets/pull/529 | 684,797,157 | MDExOlB1bGxSZXF1ZXN0NDcyNjI2MDY4 | 529 | Add MLSUM | {'avatar_url': 'https://avatars.githubusercontent.com/u/36986299?v=4', 'events_url': 'https://api.github.com/users/RachelKer/events{/privacy}', 'followers_url': 'https://api.github.com/users/RachelKer/followers', 'following_url': 'https://api.github.com/users/RachelKer/following{/other_user}', 'gists_url': 'https://api.github.com/users/RachelKer/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/RachelKer', 'id': 36986299, 'login': 'RachelKer', 'node_id': 'MDQ6VXNlcjM2OTg2Mjk5', 'organizations_url': 'https://api.github.com/users/RachelKer/orgs', 'received_events_url': 'https://api.github.com/users/RachelKer/received_events', 'repos_url': 'https://api.github.com/users/RachelKer/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/RachelKer/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/RachelKer/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/RachelKer'} | [] | closed | false | null | [] | null | ["Could you test to run the test using the changes in #527 and let me know if it fixes the issue ? If so I'll merge it and we'll be good to go :)"
'Hello, it does work on the fixing real dataset branch. Merci Quentin :)'
'Nice, glad to hear that :)\r\nde rien !'] | 2020-08-24 16:18:35+00:00 | 2020-08-26 08:04:11+00:00 | 2020-08-26 08:04:11+00:00 | CONTRIBUTOR | null | Hello (again :) !),
So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess.
However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset: the script throws an error, as a specific language config is necessary.
I think that setting a default language would be a bad workaround for this so I kept it as it is. Putting all the train files across languages together would also be a bad idea because of the size.
Thanks for your help,
Rachel
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/529/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/529/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/529.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/529', 'merged_at': '2020-08-26T08:04:10Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/529.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/529'} | true |
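Since the MLSUM script described above has no default configuration, a language config has to be passed explicitly. A short sketch; the exact config names (e.g. `"de"`) are an assumption based on the MLSUM languages:

```python
from nlp import load_dataset

# Without a language config the script raises an error instead of falling back to a default.
mlsum_de = load_dataset("mlsum", "de", split="train")
print(mlsum_de.column_names)
```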
https://api.github.com/repos/huggingface/datasets/issues/528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/528/comments | https://api.github.com/repos/huggingface/datasets/issues/528/events | https://github.com/huggingface/datasets/pull/528 | 684,673,673 | MDExOlB1bGxSZXF1ZXN0NDcyNTIzNDI1 | 528 | fix missing variable names in docs | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['The problem came from `default: ` that is rendered differently and hides the parameter names. I changed `default: ...` to `defaults to ...`'] | 2020-08-24 13:31:48+00:00 | 2020-08-25 09:04:04+00:00 | 2020-08-25 09:04:03+00:00 | MEMBER | null | fix #524 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/528/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/528/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/528.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/528', 'merged_at': '2020-08-25T09:04:03Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/528.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/528'} | true |
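The fix in PR 528 above is a docstring wording change: `default: ...` hid the parameter names when rendered, while `defaults to ...` does not. A hypothetical before/after illustration, not the library's actual docstring:

```python
def map(self, function, batched=False):
    """Apply `function` to all the examples in the table.

    Args:
        function (callable): the function to apply to each example.
        batched (bool, defaults to False): whether `function` receives a batch of examples.
            A wording such as "bool, default: False" rendered in a way that hid the parameter name.
    """
    ...
```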
https://api.github.com/repos/huggingface/datasets/issues/527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/527/comments | https://api.github.com/repos/huggingface/datasets/issues/527/events | https://github.com/huggingface/datasets/pull/527 | 684,632,930 | MDExOlB1bGxSZXF1ZXN0NDcyNDg4MzUy | 527 | Fix config used for slow test on real dataset | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-24 12:39:34+00:00 | 2020-08-25 09:20:45+00:00 | 2020-08-25 09:20:44+00:00 | MEMBER | null | As noticed in #470, #474, #476, #504 , the slow test `test_load_real_dataset` couldn't run on datasets that require config parameters.
To fix that I replaced it with one test with the first config of BUILDER_CONFIGS `test_load_real_dataset`, and another test that runs all of the configs in BUILDER_CONFIGS `test_load_real_dataset_all_configs` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/527/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/527/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/527.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/527', 'merged_at': '2020-08-25T09:20:44Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/527.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/527'} | true |
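The two tests described above can be pictured as two small helpers: one that exercises only the first entry of `BUILDER_CONFIGS`, and one that loops over all of them. This is an illustrative sketch only; the real tests live in `tests/test_dataset_common.py` and may differ:

```python
def check_first_config(builder_cls, run_check):
    # Mirrors `test_load_real_dataset`: run the check on the first config (or no config at all).
    configs = getattr(builder_cls, "BUILDER_CONFIGS", [])
    run_check(builder_cls, configs[0].name if configs else None)

def check_all_configs(builder_cls, run_check):
    # Mirrors `test_load_real_dataset_all_configs`: run the check on every config.
    configs = getattr(builder_cls, "BUILDER_CONFIGS", [])
    for config in configs or [None]:
        run_check(builder_cls, config.name if config is not None else None)
```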
https://api.github.com/repos/huggingface/datasets/issues/526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/526/comments | https://api.github.com/repos/huggingface/datasets/issues/526/events | https://github.com/huggingface/datasets/pull/526 | 684,615,455 | MDExOlB1bGxSZXF1ZXN0NDcyNDczNjcw | 526 | Returning None instead of "python" if dataset is unformatted | {'avatar_url': 'https://avatars.githubusercontent.com/u/26709476?v=4', 'events_url': 'https://api.github.com/users/TevenLeScao/events{/privacy}', 'followers_url': 'https://api.github.com/users/TevenLeScao/followers', 'following_url': 'https://api.github.com/users/TevenLeScao/following{/other_user}', 'gists_url': 'https://api.github.com/users/TevenLeScao/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/TevenLeScao', 'id': 26709476, 'login': 'TevenLeScao', 'node_id': 'MDQ6VXNlcjI2NzA5NDc2', 'organizations_url': 'https://api.github.com/users/TevenLeScao/orgs', 'received_events_url': 'https://api.github.com/users/TevenLeScao/received_events', 'repos_url': 'https://api.github.com/users/TevenLeScao/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/TevenLeScao/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/TevenLeScao'} | [] | closed | false | null | [] | null | ['We have to change the tests to expect `None` instead of `python` then'
'Merging!'] | 2020-08-24 12:10:35+00:00 | 2020-08-24 12:50:43+00:00 | 2020-08-24 12:50:42+00:00 | CONTRIBUTOR | null | Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format["type"])` works properly. Slightly breaking as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python`. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/526/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/526/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/526.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/526', 'merged_at': '2020-08-24T12:50:42Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/526.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/526'} | true |
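The behavior change in PR 526 above can be seen in a short round trip; `"numpy"` is used here only as an example output format:

```python
import nlp

dataset = nlp.Dataset.from_dict({"x": [1, 2, 3]})
print(dataset.format["type"])                    # None for an unformatted dataset (previously "python")
dataset.set_format(type=dataset.format["type"])  # the round trip that this PR makes work properly
dataset.set_format(type="numpy", columns=["x"])
print(dataset.format["type"])                    # "numpy"
```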
https://api.github.com/repos/huggingface/datasets/issues/525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/525/comments | https://api.github.com/repos/huggingface/datasets/issues/525/events | https://github.com/huggingface/datasets/issues/525 | 683,875,483 | MDU6SXNzdWU2ODM4NzU0ODM= | 525 | wmt download speed example | {'avatar_url': 'https://avatars.githubusercontent.com/u/6045025?v=4', 'events_url': 'https://api.github.com/users/sshleifer/events{/privacy}', 'followers_url': 'https://api.github.com/users/sshleifer/followers', 'following_url': 'https://api.github.com/users/sshleifer/following{/other_user}', 'gists_url': 'https://api.github.com/users/sshleifer/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/sshleifer', 'id': 6045025, 'login': 'sshleifer', 'node_id': 'MDQ6VXNlcjYwNDUwMjU=', 'organizations_url': 'https://api.github.com/users/sshleifer/orgs', 'received_events_url': 'https://api.github.com/users/sshleifer/received_events', 'repos_url': 'https://api.github.com/users/sshleifer/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/sshleifer/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sshleifer/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/sshleifer'} | [] | closed | false | null | [] | null | ["Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r\nAlso cc @patrickvonplaten "
'Mirror is not official.'
'Shall we host the files ourselves or it is fine to use this mirror in your opinion ?'
'Should we add an argument in `load_dataset` to override some URL with a custom URL (e.g. mirror) or a local path?\r\n\r\nThis could also be used to provide local files instead of the original files as requested by some users (e.g. when you made a dataset with the same format than SQuAD and what to use it instead of the official dataset files).'
"@lhoestq I think we should host it ourselves. I'll put the subset of wmt (without preprocessed files) that we need on s3 and post a link over the weekend."
'Is there a solution yet? The download speed is still too slow. 60-70kbps download for wmt16 and around 100kbps for wmt19. @sshleifer '
"I'm working on mirror links which will provide high download speed :)\r\nSee https://github.com/huggingface/datasets/issues/1892"
'Resolved via https://github.com/huggingface/datasets/pull/1912'] | 2020-08-21 23:29:06+00:00 | 2022-10-04 17:45:39+00:00 | 2022-10-04 17:45:39+00:00 | CONTRIBUTOR | null | Continuing from the Slack 1.0 roadmap thread with @lhoestq, I realized that slow downloads are only an issue sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same GCP us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 KB/s.
Whereas
```
pip install gdown # download from google drive
!gdown https://drive.google.com/uc?id=1iO7um-HWoNoRKDtw27YUSgyeubn9uXqj
```
Downloads at 127 MB/s. (The file is a copy of wmt-en-de raw).
```
nlp.load_dataset('wmt16', 'ro-en')
```
goes at 27 MB/s, much faster.
If we wget the same data from s3, the download speed is the same, but the file is ¼ the size:
```
wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro_packed_200_rand.tgz
```
Finally,
```
nlp.load_dataset('wmt19', 'zh-en')
```
Starts fast, but broken. (duplicate of #493)
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/525/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/525/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/524/comments | https://api.github.com/repos/huggingface/datasets/issues/524/events | https://github.com/huggingface/datasets/issues/524 | 683,686,359 | MDU6SXNzdWU2ODM2ODYzNTk= | 524 | Some docs are missing parameter names | {'avatar_url': 'https://avatars.githubusercontent.com/u/4564897?v=4', 'events_url': 'https://api.github.com/users/jarednielsen/events{/privacy}', 'followers_url': 'https://api.github.com/users/jarednielsen/followers', 'following_url': 'https://api.github.com/users/jarednielsen/following{/other_user}', 'gists_url': 'https://api.github.com/users/jarednielsen/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jarednielsen', 'id': 4564897, 'login': 'jarednielsen', 'node_id': 'MDQ6VXNlcjQ1NjQ4OTc=', 'organizations_url': 'https://api.github.com/users/jarednielsen/orgs', 'received_events_url': 'https://api.github.com/users/jarednielsen/received_events', 'repos_url': 'https://api.github.com/users/jarednielsen/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jarednielsen/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jarednielsen/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jarednielsen'} | [] | closed | false | null | [] | null | ['Indeed, good catch!'] | 2020-08-21 16:47:34+00:00 | 2020-08-25 09:04:03+00:00 | 2020-08-25 09:04:03+00:00 | CONTRIBUTOR | null | See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings, maybe it's an old docstring format that doesn't work with the current Sphinx version. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/524/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/524/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/523/comments | https://api.github.com/repos/huggingface/datasets/issues/523/events | https://github.com/huggingface/datasets/pull/523 | 682,573,232 | MDExOlB1bGxSZXF1ZXN0NDcwNzkxMjA1 | 523 | Speed up Tokenization by optimizing cast_to_python_objects | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['I took your comments into account and added tests for `cast_to_python_objects`'] | 2020-08-20 09:42:02+00:00 | 2020-08-24 08:54:15+00:00 | 2020-08-24 08:54:14+00:00 | MEMBER | null | I changed how `cast_to_python_objects` works to make it faster.
It is used to cast numpy/pytorch/tensorflow/pandas objects to python lists, and it works recursively.
To avoid iterating over possibly long lists, it first checks whether the first non-None element has to be cast.
If that first element needs to be cast, then all the elements of the list are cast; otherwise they stay the same.
This trick makes it possible to cast objects that contain tokenizer outputs without iterating over every single token, for example.
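For illustration, a minimal, hedged sketch of that first-element heuristic (a simplified stand-in that only handles numpy arrays, not the actual `cast_to_python_objects` code; the helper names are made up):
```python
import numpy as np

def _needs_cast(obj):
    # Simplified placeholder check: in this sketch only numpy arrays need casting
    return isinstance(obj, np.ndarray)

def _cast(obj):
    return obj.tolist() if isinstance(obj, np.ndarray) else obj

def cast_list_sketch(values):
    # Inspect only the first non-None element to decide whether casting is needed
    first = next((v for v in values if v is not None), None)
    if first is None or not _needs_cast(first):
        return values                      # fast path: no per-element iteration
    return [_cast(v) for v in values]      # slow path: cast every element
```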
Speed improvement:
```python
import transformers
import nlp
tok = transformers.BertTokenizerFast.from_pretrained("bert-base-uncased")
txt = ["a " * 512] * 1000
dataset = nlp.Dataset.from_dict({"txt": txt})
# Tokenization using .map is now faster. Previously it was taking 3.5s
%time _ = dataset.map(lambda x: tok(x["txt"]), batched=True, load_from_cache_file=False)
# 450ms
# for comparison
%time _ = tok(txt)
# 280ms
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 1, 'laugh': 0, 'rocket': 0, 'total_count': 1, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/523/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/523/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/523.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/523', 'merged_at': '2020-08-24T08:54:14Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/523.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/523'} | true |
https://api.github.com/repos/huggingface/datasets/issues/522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/522/comments | https://api.github.com/repos/huggingface/datasets/issues/522/events | https://github.com/huggingface/datasets/issues/522 | 682,478,833 | MDU6SXNzdWU2ODI0Nzg4MzM= | 522 | dictionnary typo in docs | {'avatar_url': 'https://avatars.githubusercontent.com/u/4004127?v=4', 'events_url': 'https://api.github.com/users/yonigottesman/events{/privacy}', 'followers_url': 'https://api.github.com/users/yonigottesman/followers', 'following_url': 'https://api.github.com/users/yonigottesman/following{/other_user}', 'gists_url': 'https://api.github.com/users/yonigottesman/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/yonigottesman', 'id': 4004127, 'login': 'yonigottesman', 'node_id': 'MDQ6VXNlcjQwMDQxMjc=', 'organizations_url': 'https://api.github.com/users/yonigottesman/orgs', 'received_events_url': 'https://api.github.com/users/yonigottesman/received_events', 'repos_url': 'https://api.github.com/users/yonigottesman/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/yonigottesman/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/yonigottesman/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/yonigottesman'} | [] | closed | false | null | [] | null | ['Thanks!'] | 2020-08-20 07:11:05+00:00 | 2020-08-20 07:52:14+00:00 | 2020-08-20 07:52:13+00:00 | CONTRIBUTOR | null | Many places dictionary is spelled dictionnary, not sure if its on purpose or not.
Fixed in this pr:
https://github.com/huggingface/nlp/pull/521 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/522/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/522/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/521/comments | https://api.github.com/repos/huggingface/datasets/issues/521/events | https://github.com/huggingface/datasets/pull/521 | 682,477,648 | MDExOlB1bGxSZXF1ZXN0NDcwNzEyNzgz | 521 | Fix dictionnary (dictionary) typo | {'avatar_url': 'https://avatars.githubusercontent.com/u/4004127?v=4', 'events_url': 'https://api.github.com/users/yonigottesman/events{/privacy}', 'followers_url': 'https://api.github.com/users/yonigottesman/followers', 'following_url': 'https://api.github.com/users/yonigottesman/following{/other_user}', 'gists_url': 'https://api.github.com/users/yonigottesman/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/yonigottesman', 'id': 4004127, 'login': 'yonigottesman', 'node_id': 'MDQ6VXNlcjQwMDQxMjc=', 'organizations_url': 'https://api.github.com/users/yonigottesman/orgs', 'received_events_url': 'https://api.github.com/users/yonigottesman/received_events', 'repos_url': 'https://api.github.com/users/yonigottesman/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/yonigottesman/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/yonigottesman/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/yonigottesman'} | [] | closed | false | null | [] | null | ['Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :)'] | 2020-08-20 07:09:02+00:00 | 2020-08-20 07:52:04+00:00 | 2020-08-20 07:52:04+00:00 | CONTRIBUTOR | null | This error happens many times I'm thinking maybe its spelled like this on purpose? | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/521/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/521/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/521.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/521', 'merged_at': '2020-08-20T07:52:04Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/521.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/521'} | true |
https://api.github.com/repos/huggingface/datasets/issues/520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/520/comments | https://api.github.com/repos/huggingface/datasets/issues/520/events | https://github.com/huggingface/datasets/pull/520 | 682,264,839 | MDExOlB1bGxSZXF1ZXN0NDcwNTI4MDE0 | 520 | Transform references for sacrebleu | {'avatar_url': 'https://avatars.githubusercontent.com/u/2238344?v=4', 'events_url': 'https://api.github.com/users/jbragg/events{/privacy}', 'followers_url': 'https://api.github.com/users/jbragg/followers', 'following_url': 'https://api.github.com/users/jbragg/following{/other_user}', 'gists_url': 'https://api.github.com/users/jbragg/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jbragg', 'id': 2238344, 'login': 'jbragg', 'node_id': 'MDQ6VXNlcjIyMzgzNDQ=', 'organizations_url': 'https://api.github.com/users/jbragg/orgs', 'received_events_url': 'https://api.github.com/users/jbragg/received_events', 'repos_url': 'https://api.github.com/users/jbragg/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jbragg/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jbragg/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jbragg'} | [] | closed | false | null | [] | null | ['I think I agree @lhoestq so I pushed a change.\r\nThanks for your work on the library!'] | 2020-08-20 00:26:55+00:00 | 2020-08-20 09:30:54+00:00 | 2020-08-20 09:30:53+00:00 | CONTRIBUTOR | null | Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and references. If one uses a more standard format where predictions and references are lists of the same length, sacrebleu throws an error.
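For context, a hedged sketch of the kind of reference transposition involved, following the sacrebleu Python usage linked below (not necessarily the exact code in this PR, and assuming each prediction has the same number of references):
```python
import sacrebleu

predictions = ["the cat sat on the mat", "hello there"]
# Standard layout: one list of references per prediction
references = [
    ["the cat sat on the mat", "a cat sat on a mat"],
    ["hello there", "hi there"],
]

# sacrebleu expects one stream per reference position instead,
# i.e. the references transposed
ref_streams = [list(refs) for refs in zip(*references)]

print(sacrebleu.corpus_bleu(predictions, ref_streams).score)
```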
This PR transforms reference data in a more standard format into the [unusual format](https://github.com/mjpost/sacreBLEU#using-sacrebleu-from-python) expected by sacrebleu. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/520/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/520/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/520.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/520', 'merged_at': '2020-08-20T09:30:53Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/520.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/520'} | true |
https://api.github.com/repos/huggingface/datasets/issues/519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/519/comments | https://api.github.com/repos/huggingface/datasets/issues/519/events | https://github.com/huggingface/datasets/issues/519 | 682,193,882 | MDU6SXNzdWU2ODIxOTM4ODI= | 519 | [BUG] Metrics throwing new error on master since 0.4.0 | {'avatar_url': 'https://avatars.githubusercontent.com/u/2238344?v=4', 'events_url': 'https://api.github.com/users/jbragg/events{/privacy}', 'followers_url': 'https://api.github.com/users/jbragg/followers', 'following_url': 'https://api.github.com/users/jbragg/following{/other_user}', 'gists_url': 'https://api.github.com/users/jbragg/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jbragg', 'id': 2238344, 'login': 'jbragg', 'node_id': 'MDQ6VXNlcjIyMzgzNDQ=', 'organizations_url': 'https://api.github.com/users/jbragg/orgs', 'received_events_url': 'https://api.github.com/users/jbragg/received_events', 'repos_url': 'https://api.github.com/users/jbragg/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jbragg/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jbragg/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jbragg'} | [] | closed | false | null | [] | null | ['Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric'
'Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105 '] | 2020-08-19 21:29:15+00:00 | 2022-06-02 16:41:01+00:00 | 2020-08-19 22:04:40+00:00 | CONTRIBUTOR | null | The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
Wasn't happening on 0.4.0 but happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 242, in add_batch
batch = self.info.features.encode_batch(batch)
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in encode_batch
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in <listcomp>
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 456, in encode_nested_example
raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/519/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/519/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/518/comments | https://api.github.com/repos/huggingface/datasets/issues/518/events | https://github.com/huggingface/datasets/pull/518 | 682,131,165 | MDExOlB1bGxSZXF1ZXN0NDcwNDE0ODE1 | 518 | [METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | ['(test failure is unrelated)'
'As discussed with @thomwolf merging since the hyperparameter-search has been merged in transformers.'] | 2020-08-19 19:43:08+00:00 | 2020-08-24 16:01:40+00:00 | 2020-08-24 16:01:39+00:00 | MEMBER | null | Move the acquisition of the filelock at a later stage during metrics processing so it can be pickled/cloudpickled after instantiation.
Also add some tests on pickling, concurrent but separate metric instances and concurrent and distributed metric instances.
This significantly changes the caching behavior for the metrics:
- if the metric is used in a non-distributed setup (the most common case) we try to find a free cache file using a UUID instead of asking for an `experiment_id` when we can't lock the cache file; this allows using several instances of the same metric in parallel.
- if the metric is used in a distributed setup we ask for an `experiment_id` if we can't lock the cache file (because all the nodes need to have related cache file names for the final sync); see the sketch after this list.
- after the computation, we free the locks and delete all the cache files.
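A hedged usage sketch of the two cases described above (argument names follow this description and may not match the released API exactly):
```python
import nlp

# Non-distributed: two independent instances of the same metric can now run
# side by side; each one finds its own free cache file instead of colliding.
metric_a = nlp.load_metric("sacrebleu")
metric_b = nlp.load_metric("sacrebleu")

# Distributed (2 processes): a shared experiment_id keeps the cache file names
# related so the final sync can gather the per-process results.
metric_rank0 = nlp.load_metric("sacrebleu", num_process=2, process_id=0, experiment_id="run-1")
metric_rank1 = nlp.load_metric("sacrebleu", num_process=2, process_id=1, experiment_id="run-1")
```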
Breaking: Some arguments for Metrics initialization have been removed for simplicity (`version`...) and some have been renamed for consistency with the rest of the library (`in_memory` => `keep_in_memory`).
Also remove the `_has_transformers` detection in utils to avoid importing transformers everytime during loading. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/518/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/518/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/518.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/518', 'merged_at': '2020-08-24T16:01:39Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/518.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/518'} | true |
https://api.github.com/repos/huggingface/datasets/issues/517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/517/comments | https://api.github.com/repos/huggingface/datasets/issues/517/events | https://github.com/huggingface/datasets/issues/517 | 681,896,944 | MDU6SXNzdWU2ODE4OTY5NDQ= | 517 | add MLDoc dataset | {'avatar_url': 'https://avatars.githubusercontent.com/u/13238952?v=4', 'events_url': 'https://api.github.com/users/jxmorris12/events{/privacy}', 'followers_url': 'https://api.github.com/users/jxmorris12/followers', 'following_url': 'https://api.github.com/users/jxmorris12/following{/other_user}', 'gists_url': 'https://api.github.com/users/jxmorris12/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jxmorris12', 'id': 13238952, 'login': 'jxmorris12', 'node_id': 'MDQ6VXNlcjEzMjM4OTUy', 'organizations_url': 'https://api.github.com/users/jxmorris12/orgs', 'received_events_url': 'https://api.github.com/users/jxmorris12/received_events', 'repos_url': 'https://api.github.com/users/jxmorris12/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jxmorris12/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jxmorris12/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jxmorris12'} | [{'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset', 'id': 2067376369, 'name': 'dataset request', 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request'}] | open | false | null | [] | null | ['Any updates on this?'
'This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies.'] | 2020-08-19 14:41:59+00:00 | 2021-08-03 05:59:33+00:00 | null | CONTRIBUTOR | null | Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish | {'+1': 4, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 4, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/517/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/517/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/516/comments | https://api.github.com/repos/huggingface/datasets/issues/516/events | https://github.com/huggingface/datasets/pull/516 | 681,846,032 | MDExOlB1bGxSZXF1ZXN0NDcwMTY5NTA0 | 516 | [Breaking] Rename formated to formatted | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-19 13:35:23+00:00 | 2020-08-20 08:41:17+00:00 | 2020-08-20 08:41:16+00:00 | MEMBER | null | `formated` is not correct but `formatted` is | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/516/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/516/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/516.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/516', 'merged_at': '2020-08-20T08:41:16Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/516.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/516'} | true |
https://api.github.com/repos/huggingface/datasets/issues/515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/515/comments | https://api.github.com/repos/huggingface/datasets/issues/515/events | https://github.com/huggingface/datasets/pull/515 | 681,845,619 | MDExOlB1bGxSZXF1ZXN0NDcwMTY5MTQ0 | 515 | Fix batched map for formatted dataset | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-19 13:34:50+00:00 | 2020-08-20 20:30:43+00:00 | 2020-08-20 20:30:42+00:00 | MEMBER | null | If you had a dataset formatted as numpy for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (ex: batch of length 999 instead of 1000).
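A hedged sketch of the kind of call that could hit this (illustrative only; the dataset and column names are made up):
```python
import nlp

dataset = nlp.Dataset.from_dict({"text": ["a"] * 1000, "label": [0] * 1000})
dataset.set_format(type="numpy", columns=["label"])

# Batched map on a formatted dataset: the untouched "text" column could come
# back with a different length than the new column, crashing table creation.
dataset = dataset.map(lambda batch: {"label_plus_one": [l + 1 for l in batch["label"]]}, batched=True)
```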
This happened during the creation of the `pa.Table`, since columns had different lengths. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/515/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/515/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/515.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/515', 'merged_at': '2020-08-20T20:30:42Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/515.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/515'} | true |
https://api.github.com/repos/huggingface/datasets/issues/514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/514/comments | https://api.github.com/repos/huggingface/datasets/issues/514/events | https://github.com/huggingface/datasets/issues/514 | 681,256,348 | MDU6SXNzdWU2ODEyNTYzNDg= | 514 | dataset.shuffle(keep_in_memory=True) is never allowed | {'avatar_url': 'https://avatars.githubusercontent.com/u/24683907?v=4', 'events_url': 'https://api.github.com/users/vegarab/events{/privacy}', 'followers_url': 'https://api.github.com/users/vegarab/followers', 'following_url': 'https://api.github.com/users/vegarab/following{/other_user}', 'gists_url': 'https://api.github.com/users/vegarab/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/vegarab', 'id': 24683907, 'login': 'vegarab', 'node_id': 'MDQ6VXNlcjI0NjgzOTA3', 'organizations_url': 'https://api.github.com/users/vegarab/orgs', 'received_events_url': 'https://api.github.com/users/vegarab/received_events', 'repos_url': 'https://api.github.com/users/vegarab/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/vegarab/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/vegarab/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/vegarab'} | [{'color': '7057ff', 'default': True, 'description': 'Good for newcomers', 'id': 1935892877, 'name': 'good first issue', 'node_id': 'MDU6TGFiZWwxOTM1ODkyODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue'}
{'color': 'DF8D62', 'default': False, 'description': '', 'id': 4614514401, 'name': 'hacktoberfest', 'node_id': 'LA_kwDODunzps8AAAABEwvm4Q', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest'}] | closed | false | null | [] | null | ['This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf '
"Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_memory` to `True`, the assert should pass, no?"
'I failed to realise that this only applies to `shuffle()`. Whenever `keep_in_memory` is set to True, this is passed on to the `select()` function. However, if `cache_file_name` is None, it will be defined in the `shuffle()` function before it is passed on to `select()`. \r\n\r\nThus, `select()` is called with `keep_in_memory=True` and a not None value for `cache_file_name`. \r\nThis is essentially fixed in #513 \r\n\r\nEasily reproducible:\r\n```python\r\n>>> import nlp\r\n>>> data = nlp.load_dataset("cosmos_qa", split="train")\r\nUsing custom data configuration default\r\n>>> data.shuffle(keep_in_memory=True)\r\nTraceback (most recent call last):\r\n File "<stdin>", line 1, in <module>\r\n File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 1398, in shuffle\r\n verbose=verbose,\r\n File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 1178, in select\r\n ), "Please use either `keep_in_memory` or `cache_file_name` but not both."\r\nAssertionError: Please use either `keep_in_memory` or `cache_file_name` but not both.\r\n>>>data.select([0], keep_in_memory=True)\r\n# No error\r\n```'
'Oh yes ok got it thanks. Should be fixed if we are happy with #513 indeed.'
"My bad. This is actually not fixed in #513. Sorry about that...\r\nThe new `indices_cache_file_name` is set to a non-None value in the new `shuffle()` as well. \r\n\r\nThe buffer and caching mechanisms used in the `select()` function are too intricate for me to understand why the check is there at all. I've removed it in my local build and it seems to be working fine for my project, without really considering other implications of the change. \r\n\r\n"
"Ok I'll investigate and add a series of tests on the `keep_in_memory=True` settings which is under-tested atm"
'Hey, still seeing this issue with the latest version.' 'The same :('
'These are the steps needed to fix this issue:\r\n1. add the following check to `Dataset.shuffle`:\r\n```python\r\nif keep_in_memory and indices_cache_file_name is not None:\r\n raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.")\r\n```\r\n2. set `indices_cache_file_name` to `None` if `keep_in_memory` is True in the call to `select`\r\n3. add a test with `shuffle(keep_in_memory=True)`'
'Hi @mariosasko , I have opened this PR #5082 '] | 2020-08-18 18:47:40+00:00 | 2022-10-10 12:21:58+00:00 | 2022-10-10 12:21:58+00:00 | CONTRIBUTOR | null | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either `keep_in_memory` or `cache_file_name` but not both."
```
This affects both `shuffle()`, since `select()` is a sub-routine of it, and `map()`, which has the same check.
I'd love to fix this myself, but I am unsure what the intention of the assert is, given the rest of the logic in the function concerning `cache_file_name` and `keep_in_memory`. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/514/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/514/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/513/comments | https://api.github.com/repos/huggingface/datasets/issues/513/events | https://github.com/huggingface/datasets/pull/513 | 681,215,612 | MDExOlB1bGxSZXF1ZXN0NDY5NjQxMjg1 | 513 | [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods | {'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'events_url': 'https://api.github.com/users/thomwolf/events{/privacy}', 'followers_url': 'https://api.github.com/users/thomwolf/followers', 'following_url': 'https://api.github.com/users/thomwolf/following{/other_user}', 'gists_url': 'https://api.github.com/users/thomwolf/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/thomwolf', 'id': 7353373, 'login': 'thomwolf', 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'organizations_url': 'https://api.github.com/users/thomwolf/orgs', 'received_events_url': 'https://api.github.com/users/thomwolf/received_events', 'repos_url': 'https://api.github.com/users/thomwolf/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/thomwolf/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/thomwolf/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/thomwolf'} | [] | closed | false | null | [] | null | ["Ok I fixed `concatenate_datasets` and added tests\r\nFeel free to merge if it's good for you @thomwolf "
"Ok, adding some benchmarks for map/filters and then I'll merge"
'Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n```\r\n/__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,\r\nand PyTorch does not support non-writeable tensors. This means you can write to the underlying\r\n(supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to\r\nprotect its data or make it writeable before converting it to a tensor. This type of warning will be\r\nsuppressed for the rest of this program.\r\n(Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n532\r\n return torch.tensor(x, **format_kwargs)\r\n```'
"> Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n> \r\n> ```\r\n> /__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,\r\n> and PyTorch does not support non-writeable tensors. This means you can write to the underlying\r\n> (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to\r\n> protect its data or make it writeable before converting it to a tensor. This type of warning will be\r\n> suppressed for the rest of this program.\r\n> (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n> 532\r\n> return torch.tensor(x, **format_kwargs)\r\n> ```\r\n\r\nNot sure why we have that, it's probably linked to zero copy from arrow to numpy"] | 2020-08-18 17:36:02+00:00 | 2020-08-28 08:41:51+00:00 | 2020-08-28 08:41:50+00:00 | MEMBER | null | Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method, which copies the dataset to a new table to remove the indices mapping (with tests).
All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating over very large batches of the dataset might be a little slower when the order of the samples has changed, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck.
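A hedged usage sketch of what this enables (the method name follows this PR's description; exact arguments may differ):
```python
import nlp

dataset = nlp.load_dataset("glue", "sst2", split="train")

# These now only build an indices mapping instead of deep-copying the table
shuffled = dataset.shuffle(seed=42)
subset = shuffled.select(range(1000))

# Materialize the mapping into a new contiguous table when contiguous
# slicing speed matters again
flat = subset.flatten_indices()
```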
*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 1, 'total_count': 1, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/513/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/513/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/513.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/513', 'merged_at': '2020-08-28T08:41:50Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/513.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/513'} | true |
https://api.github.com/repos/huggingface/datasets/issues/512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/512/comments | https://api.github.com/repos/huggingface/datasets/issues/512/events | https://github.com/huggingface/datasets/pull/512 | 681,137,164 | MDExOlB1bGxSZXF1ZXN0NDY5NTc2NzE3 | 512 | Delete CONTRIBUTING.md | {'avatar_url': 'https://avatars.githubusercontent.com/u/56394989?v=4', 'events_url': 'https://api.github.com/users/ChenZehong13/events{/privacy}', 'followers_url': 'https://api.github.com/users/ChenZehong13/followers', 'following_url': 'https://api.github.com/users/ChenZehong13/following{/other_user}', 'gists_url': 'https://api.github.com/users/ChenZehong13/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/ChenZehong13', 'id': 56394989, 'login': 'ChenZehong13', 'node_id': 'MDQ6VXNlcjU2Mzk0OTg5', 'organizations_url': 'https://api.github.com/users/ChenZehong13/orgs', 'received_events_url': 'https://api.github.com/users/ChenZehong13/received_events', 'repos_url': 'https://api.github.com/users/ChenZehong13/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ChenZehong13/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/ChenZehong13'} | [] | closed | false | null | [] | null | ['😱' "Yeah, this is spammy behavior. I've reported the user handle."] | 2020-08-18 15:33:25+00:00 | 2020-08-18 15:48:21+00:00 | 2020-08-18 15:39:07+00:00 | NONE | null | null | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/512/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/512/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/512.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/512', 'merged_at': None, 'patch_url': 'https://github.com/huggingface/datasets/pull/512.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/512'} | true |
https://api.github.com/repos/huggingface/datasets/issues/511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/511/comments | https://api.github.com/repos/huggingface/datasets/issues/511/events | https://github.com/huggingface/datasets/issues/511 | 681,055,553 | MDU6SXNzdWU2ODEwNTU1NTM= | 511 | dataset.shuffle() and select() resets format. Intended? | {'avatar_url': 'https://avatars.githubusercontent.com/u/24683907?v=4', 'events_url': 'https://api.github.com/users/vegarab/events{/privacy}', 'followers_url': 'https://api.github.com/users/vegarab/followers', 'following_url': 'https://api.github.com/users/vegarab/following{/other_user}', 'gists_url': 'https://api.github.com/users/vegarab/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/vegarab', 'id': 24683907, 'login': 'vegarab', 'node_id': 'MDQ6VXNlcjI0NjgzOTA3', 'organizations_url': 'https://api.github.com/users/vegarab/orgs', 'received_events_url': 'https://api.github.com/users/vegarab/received_events', 'repos_url': 'https://api.github.com/users/vegarab/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/vegarab/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/vegarab/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/vegarab'} | [] | closed | false | null | [] | null | ["Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table and infos).\r\n\r\nThinking about it I don't see a strong reason against transmitting the format from the parent dataset to its newly created child. It's probably what's expected by the user in most cases. What do you think @lhoestq?\r\n\r\nBy the way, I've been working today on a refactoring of all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). The idea is to speed them up by a lot (like, really a lot) by working as much as possible with an indices mapping table instead of doing a deep copy of the full dataset as we've been doing currently. You can give it a look and try it here: https://github.com/huggingface/nlp/pull/513\r\nFeedbacks are very much welcome"
"I think it's ok to keep the format.\r\nIf we want to have this behavior for `.map` too we just have to make sure it doesn't keep a column that's been removed."
'Shall we have this in the coming release by the way @lhoestq ?'
'Yes sure !'
'Since datasets 1.0.0 the format is not reset anymore.\r\nClosing this one, but feel free to re-open if you have other questions'] | 2020-08-18 13:46:01+00:00 | 2020-09-14 08:45:38+00:00 | 2020-09-14 08:45:38+00:00 | CONTRIBUTOR | null | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later loading the dataset object using `torch.load("dataset.pt")`, which conserves the defined format before saving.
I do shuffling and selecting (for controlling dataset size) after loading the data from .pt-file, as it's convenient whenever you train multiple models with varying sizes of the same dataset.
The obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`.
_I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_
#### How to reproduce:
```python
import nlp
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
def create_features(batch):
context_encoding = tokenizer.batch_encode_plus(batch["context"])
return {"input_ids": context_encoding["input_ids"]}
dataset = nlp.load_dataset("cosmos_qa", split="train")
dataset = dataset.map(create_features, batched=True)
dataset.set_format(type="torch", columns=["input_ids"])
dataset[0]
# {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])}
dataset = dataset.shuffle()
dataset[0]
# {'id': '3Q9(...)20', 'context': "Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... , 0, 0]}
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/511/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/511/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/510/comments | https://api.github.com/repos/huggingface/datasets/issues/510/events | https://github.com/huggingface/datasets/issues/510 | 680,823,644 | MDU6SXNzdWU2ODA4MjM2NDQ= | 510 | Version of numpy to use the library | {'avatar_url': 'https://avatars.githubusercontent.com/u/6966175?v=4', 'events_url': 'https://api.github.com/users/isspek/events{/privacy}', 'followers_url': 'https://api.github.com/users/isspek/followers', 'following_url': 'https://api.github.com/users/isspek/following{/other_user}', 'gists_url': 'https://api.github.com/users/isspek/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/isspek', 'id': 6966175, 'login': 'isspek', 'node_id': 'MDQ6VXNlcjY5NjYxNzU=', 'organizations_url': 'https://api.github.com/users/isspek/orgs', 'received_events_url': 'https://api.github.com/users/isspek/received_events', 'repos_url': 'https://api.github.com/users/isspek/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/isspek/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/isspek/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/isspek'} | [] | closed | false | null | [] | null | ["Seems like this method was added in 1.17. I'll add a requirement on this."
'Thank you so much. After upgrading the numpy library, it worked.'] | 2020-08-18 08:59:13+00:00 | 2020-08-19 18:35:56+00:00 | 2020-08-19 18:35:56+00:00 | NONE | null | Thank you so much for your excellent work! I would like to use nlp library in my project. While importing nlp, I am receiving the following error `AttributeError: module 'numpy.random' has no attribute 'Generator'` Numpy version in my project is 1.16.0. May I learn which numpy version is used for the nlp library.
Thanks in advance. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/510/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/510/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/509/comments | https://api.github.com/repos/huggingface/datasets/issues/509/events | https://github.com/huggingface/datasets/issues/509 | 679,711,585 | MDU6SXNzdWU2Nzk3MTE1ODU= | 509 | Converting TensorFlow dataset example | {'avatar_url': 'https://avatars.githubusercontent.com/u/22762845?v=4', 'events_url': 'https://api.github.com/users/saareliad/events{/privacy}', 'followers_url': 'https://api.github.com/users/saareliad/followers', 'following_url': 'https://api.github.com/users/saareliad/following{/other_user}', 'gists_url': 'https://api.github.com/users/saareliad/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/saareliad', 'id': 22762845, 'login': 'saareliad', 'node_id': 'MDQ6VXNlcjIyNzYyODQ1', 'organizations_url': 'https://api.github.com/users/saareliad/orgs', 'received_events_url': 'https://api.github.com/users/saareliad/received_events', 'repos_url': 'https://api.github.com/users/saareliad/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/saareliad/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/saareliad/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/saareliad'} | [] | closed | false | null | [] | null | ["Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it work in reverse, feel free to open a PR to share it with the community :)"
'In our docs: [Using a Dataset with PyTorch/Tensorflow](https://huggingface.co/docs/datasets/torch_tensorflow.html).'] | 2020-08-16 08:05:20+00:00 | 2021-08-03 06:01:18+00:00 | 2021-08-03 06:01:17+00:00 | NONE | null | Hi,
I want to use TensorFlow datasets with this repo. I noticed you made a conversion script;
can you give a simple example of using it?
Thanks
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/509/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/509/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/508/comments | https://api.github.com/repos/huggingface/datasets/issues/508/events | https://github.com/huggingface/datasets/issues/508 | 679,705,734 | MDU6SXNzdWU2Nzk3MDU3MzQ= | 508 | TypeError: Receiver() takes no arguments | {'avatar_url': 'https://avatars.githubusercontent.com/u/1225851?v=4', 'events_url': 'https://api.github.com/users/sebastiantomac/events{/privacy}', 'followers_url': 'https://api.github.com/users/sebastiantomac/followers', 'following_url': 'https://api.github.com/users/sebastiantomac/following{/other_user}', 'gists_url': 'https://api.github.com/users/sebastiantomac/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/sebastiantomac', 'id': 1225851, 'login': 'sebastiantomac', 'node_id': 'MDQ6VXNlcjEyMjU4NTE=', 'organizations_url': 'https://api.github.com/users/sebastiantomac/orgs', 'received_events_url': 'https://api.github.com/users/sebastiantomac/received_events', 'repos_url': 'https://api.github.com/users/sebastiantomac/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/sebastiantomac/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sebastiantomac/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/sebastiantomac'} | [] | closed | false | null | [] | null | ['Which version of Apache Beam do you have (can you copy your full environment info here)?'
'apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). '
"Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a dummy pipeline with [this code](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_minimal.py)\r\n\r\nIf you get the same error, it means that the issue comes from apache beam.\r\nOtherwise we'll investigate what went wrong here"
'Still, same error, so I guess it is on apache beam then. \r\nThanks for the investigation.'
'Thanks for trying\r\nLet us know if you find clues of what caused this issue, or if you find a fix'] | 2020-08-16 07:18:16+00:00 | 2020-09-01 14:53:33+00:00 | 2020-09-01 14:49:03+00:00 | NONE | null | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
This fails in the apache beam runner.
```
Traceback (most recent call last):
File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module>
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner')
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare
pipeline_results = pipeline.run()
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run
return self.runner.run_pipeline(self, self._options)
....
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded
self.output(decoded_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output
cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast
return type(*args)
TypeError: Receiver() takes no arguments
```
This is run on a Windows 10 machine with python 3.8. I get the same error loading the swedish wikipedia dump. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/508/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/508/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/507/comments | https://api.github.com/repos/huggingface/datasets/issues/507/events | https://github.com/huggingface/datasets/issues/507 | 679,400,683 | MDU6SXNzdWU2Nzk0MDA2ODM= | 507 | Errors when I use | {'avatar_url': 'https://avatars.githubusercontent.com/u/30506151?v=4', 'events_url': 'https://api.github.com/users/mchari/events{/privacy}', 'followers_url': 'https://api.github.com/users/mchari/followers', 'following_url': 'https://api.github.com/users/mchari/following{/other_user}', 'gists_url': 'https://api.github.com/users/mchari/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/mchari', 'id': 30506151, 'login': 'mchari', 'node_id': 'MDQ6VXNlcjMwNTA2MTUx', 'organizations_url': 'https://api.github.com/users/mchari/orgs', 'received_events_url': 'https://api.github.com/users/mchari/received_events', 'repos_url': 'https://api.github.com/users/mchari/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/mchari/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mchari/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/mchari'} | [] | closed | false | null | [] | null | ['Looks like an issue with 3.0.2 transformers version. Works fine when I use "master" version of transformers.'] | 2020-08-14 21:03:57+00:00 | 2020-08-14 21:39:10+00:00 | 2020-08-14 21:39:10+00:00 | NONE | null | I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors
I am using **transformers 3.0.2**.
```python
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoTokenizer

model_name = "deepset/roberta-base-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)

QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
```
The errors are:
```
res = nlp(QA_input)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__
    for s, e, score in zip(starts, ends, scores)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
    for s, e, score in zip(starts, ends, scores)
KeyError: 0
```
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/507/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/507/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/506/comments | https://api.github.com/repos/huggingface/datasets/issues/506/events | https://github.com/huggingface/datasets/pull/506 | 679,164,788 | MDExOlB1bGxSZXF1ZXN0NDY3OTkwNjc2 | 506 | fix dataset.map for function without outputs | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-14 13:40:22+00:00 | 2020-08-17 11:24:39+00:00 | 2020-08-17 11:24:38+00:00 | MEMBER | null | As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable.
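A hedged sketch of such a call (using `Dataset.from_dict` as in the examples above; the function returns nothing and is only called for its side effect):
```python
import nlp

dataset = nlp.Dataset.from_dict({"text": ["a", "b", "c"]})

def inspect(example):
    # No return value: .map should treat this as a no-op on the columns
    print(len(example["text"]))

dataset.map(inspect)
```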
I fixed that and added tests.
Thanks @avloss for reporting | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/506/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/506/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/506.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/506', 'merged_at': '2020-08-17T11:24:38Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/506.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/506'} | true |
https://api.github.com/repos/huggingface/datasets/issues/505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/505/comments | https://api.github.com/repos/huggingface/datasets/issues/505/events | https://github.com/huggingface/datasets/pull/505 | 678,791,400 | MDExOlB1bGxSZXF1ZXN0NDY3NjgxMjY4 | 505 | tmp_file referenced before assignment | {'avatar_url': 'https://avatars.githubusercontent.com/u/17853685?v=4', 'events_url': 'https://api.github.com/users/avloss/events{/privacy}', 'followers_url': 'https://api.github.com/users/avloss/followers', 'following_url': 'https://api.github.com/users/avloss/following{/other_user}', 'gists_url': 'https://api.github.com/users/avloss/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/avloss', 'id': 17853685, 'login': 'avloss', 'node_id': 'MDQ6VXNlcjE3ODUzNjg1', 'organizations_url': 'https://api.github.com/users/avloss/orgs', 'received_events_url': 'https://api.github.com/users/avloss/received_events', 'repos_url': 'https://api.github.com/users/avloss/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/avloss/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/avloss/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/avloss'} | [] | closed | false | null | [] | null | ["Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.\r\n(I'm doing a new PR because I know there's some other place where it needs to be fixed)"
"I'm closing this one as I created the other PR."] | 2020-08-13 23:27:33+00:00 | 2020-08-14 13:42:46+00:00 | 2020-08-14 13:42:46+00:00 | NONE | null | Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file". | {'+1': 1, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 1, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/505/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/505/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/505.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/505', 'merged_at': None, 'patch_url': 'https://github.com/huggingface/datasets/pull/505.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/505'} | true |
https://api.github.com/repos/huggingface/datasets/issues/504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/504/comments | https://api.github.com/repos/huggingface/datasets/issues/504/events | https://github.com/huggingface/datasets/pull/504 | 678,756,211 | MDExOlB1bGxSZXF1ZXN0NDY3NjUxOTA5 | 504 | Added downloading to Hyperpartisan news detection | {'avatar_url': 'https://avatars.githubusercontent.com/u/13795113?v=4', 'events_url': 'https://api.github.com/users/ghomasHudson/events{/privacy}', 'followers_url': 'https://api.github.com/users/ghomasHudson/followers', 'following_url': 'https://api.github.com/users/ghomasHudson/following{/other_user}', 'gists_url': 'https://api.github.com/users/ghomasHudson/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/ghomasHudson', 'id': 13795113, 'login': 'ghomasHudson', 'node_id': 'MDQ6VXNlcjEzNzk1MTEz', 'organizations_url': 'https://api.github.com/users/ghomasHudson/orgs', 'received_events_url': 'https://api.github.com/users/ghomasHudson/received_events', 'repos_url': 'https://api.github.com/users/ghomasHudson/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ghomasHudson/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/ghomasHudson'} | [] | closed | false | null | [] | null | ['Thank you @ghomasHudson for making our dataset available! This is great!'
'The test passes since #527 :)'] | 2020-08-13 21:53:46+00:00 | 2020-08-27 08:18:41+00:00 | 2020-08-27 08:18:41+00:00 | CONTRIBUTOR | null | Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel !
Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `default` in this test. Might be related to #474 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/504/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/504/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/504.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/504', 'merged_at': '2020-08-27T08:18:41Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/504.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/504'} | true |
https://api.github.com/repos/huggingface/datasets/issues/503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/503/comments | https://api.github.com/repos/huggingface/datasets/issues/503/events | https://github.com/huggingface/datasets/pull/503 | 678,726,538 | MDExOlB1bGxSZXF1ZXN0NDY3NjI3MTEw | 503 | CompGuessWhat?! 0.2.0 | {'avatar_url': 'https://avatars.githubusercontent.com/u/1479733?v=4', 'events_url': 'https://api.github.com/users/aleSuglia/events{/privacy}', 'followers_url': 'https://api.github.com/users/aleSuglia/followers', 'following_url': 'https://api.github.com/users/aleSuglia/following{/other_user}', 'gists_url': 'https://api.github.com/users/aleSuglia/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/aleSuglia', 'id': 1479733, 'login': 'aleSuglia', 'node_id': 'MDQ6VXNlcjE0Nzk3MzM=', 'organizations_url': 'https://api.github.com/users/aleSuglia/orgs', 'received_events_url': 'https://api.github.com/users/aleSuglia/received_events', 'repos_url': 'https://api.github.com/users/aleSuglia/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/aleSuglia/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/aleSuglia/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/aleSuglia'} | [] | closed | false | null | [] | null | ["I don't see any significant change in the dataset script (except the version value update), can you check that again please ?"
'Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ?'
"Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap!"
'Ok np :)\r\nGood luck with your work for the conference'
"I finally managed to find some time to complete this. The only weird thing about this release is that I had to run the tests with the ignore checksum flag. Could it be because the Dropbox link doesn't change but the file does? Sorry didn't have the time to check the code to see what's happening behind the scenes.\r\n"
"Yes if the file changed, then the checksum verification won't pass as it expects to see the checksum of the old file.\r\nThe checksum is computed by hashing the complete file.\r\nYou can update the checksum by doing \r\n\r\n```\r\nnlp-cli test ./datasets/compguesswhat --save_infos --all_configs\r\n```"
'Any updates on this?'
"Hi :)\r\n\r\nI think what's left to do is\r\n1- rebase from master, since we changed the name of the library\r\n2- update the metadata file of the dataset using the command \r\n```\r\ndatasets-cli test ./datasets/compguesswhat --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nThis command should update the checksum of the dropbox file"
"That's perfect. I'll have a look at it later today!" 'Nice thanks !'
"@lhoestq not sure why the quality check doesn't pass. Unfortunately CircleCI doesn't show the actual error. If I run `black` on my machine it works just fine. Ideas?"
'@lhoestq any updates? :) '
'Your version of `black` might be outdated, or you run using `black` instead of `make style` since it reformatted 100+ files.\r\nCould you try to update black, then `make style` ?'
'Yes I think my versions of isort and black were outdated. Thanks @lhoestq :)\r\n'
"It still doesn't look right in terms of line-length.\r\nAre you running `black` or `make style` ?"
"I'm running `make style`. This is the output of the command:\r\n\r\n```\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n250 files left unchanged.\r\nisort tests src benchmarks datasets metrics\r\n```"
'Weird I have the same output without file changes with black `20.8b1` and isort `5.6.4` using `make style` too'
"I think that's because black doesn't revert the changes you first did with the old version.\r\nCould you open a new PR with only the ComGuessWhat files updated ? Hopefully now that black is up to date it should work directly (and to avoid 100+ files changes)"
'I will have a look at it tomorrow. Thanks for your help!'
"I'm closing this one and I'll open a new one."] | 2020-08-13 20:51:26+00:00 | 2020-10-21 06:54:29+00:00 | 2020-10-21 06:54:29+00:00 | CONTRIBUTOR | null | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/503/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/503/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/503.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/503', 'merged_at': None, 'patch_url': 'https://github.com/huggingface/datasets/pull/503.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/503'} | true |
https://api.github.com/repos/huggingface/datasets/issues/502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/502/comments | https://api.github.com/repos/huggingface/datasets/issues/502/events | https://github.com/huggingface/datasets/pull/502 | 678,546,070 | MDExOlB1bGxSZXF1ZXN0NDY3NDc1MDg0 | 502 | Fix tokenizers caching | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['This should fix #501 and also the issue you sent me on slack @sgugger .'] | 2020-08-13 15:53:37+00:00 | 2020-08-19 13:37:19+00:00 | 2020-08-19 13:37:18+00:00 | MEMBER | null | I've found some cases where the caching didn't work properly for tokenizers:
1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions
2. if a tokenizer has a cache attribute that changes after some calls, then the caching would not work after cache updates
3. if a tokenizer is used inside a function, the caching of this function would result in the same cache file for different tokenizers
4. if the `unique_no_split_tokens` attribute is not the same across sessions (after loading a tokenizer) then the caching could be inconsistent
To fix that, this is what I did:
1. register a specific `save_regex` function for pickle that makes regex dumps deterministic
2. ignore cache attribute of some tokenizers before dumping
3. enable recursive dump by default for all dumps
4. make `unique_no_split_tokens` deterministic in https://github.com/huggingface/transformers/pull/6461
I also added tests to make sure that tokenizers hashing works as expected.
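For illustration, a minimal standalone sketch of point 1 above: making compiled regex objects pickle deterministically by reducing them to their `(pattern, flags)` pair. This is not the exact pickler code added in this PR; the helper name and the use of plain `re`/`copyreg` are assumptions:
```python
import copyreg
import pickle
import re


def _save_regex(pattern):
    # Reduce a compiled pattern to (pattern string, flags) so that its pickled
    # bytes never depend on interpreter state or memory layout.
    return re.compile, (pattern.pattern, pattern.flags)


copyreg.pickle(re.Pattern, _save_regex)

dump_1 = pickle.dumps(re.compile(r"'s|'t|\s+"))
dump_2 = pickle.dumps(re.compile(r"'s|'t|\s+"))
assert dump_1 == dump_2  # identical bytes -> identical hash -> stable cache file name
```
The same idea applies to the third-party `regex` package used by some tokenizers; only the module providing `compile` changes.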
In the future we should find a way to test if hashing also works across sessions (maybe using two CI jobs ? or by hardcoding a tokenizer's hash ?) | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/502/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/502/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/502.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/502', 'merged_at': '2020-08-19T13:37:17Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/502.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/502'} | true |
https://api.github.com/repos/huggingface/datasets/issues/501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/501/comments | https://api.github.com/repos/huggingface/datasets/issues/501/events | https://github.com/huggingface/datasets/issues/501 | 677,952,893 | MDU6SXNzdWU2Nzc5NTI4OTM= | 501 | Caching doesn't work for map (non-deterministic) | {'avatar_url': 'https://avatars.githubusercontent.com/u/8149933?v=4', 'events_url': 'https://api.github.com/users/wulu473/events{/privacy}', 'followers_url': 'https://api.github.com/users/wulu473/followers', 'following_url': 'https://api.github.com/users/wulu473/following{/other_user}', 'gists_url': 'https://api.github.com/users/wulu473/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/wulu473', 'id': 8149933, 'login': 'wulu473', 'node_id': 'MDQ6VXNlcjgxNDk5MzM=', 'organizations_url': 'https://api.github.com/users/wulu473/orgs', 'received_events_url': 'https://api.github.com/users/wulu473/received_events', 'repos_url': 'https://api.github.com/users/wulu473/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/wulu473/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/wulu473/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/wulu473'} | [] | closed | false | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [{'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'}] | null | ["Thanks for reporting !\r\n\r\nTo store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.\r\nThe hash doesn't seem to stay the same over sessions for the tokenizer.\r\nApparently this is 
because the regex at `tokenizer.pat` is not well supported by our hashing function.\r\n\r\nI'm working on a fix"
'Thanks everyone. Works great now.'
'Hi. I believe the fix was for the nlp library. Is there a solution to handle compiled regex expressions in .map() with the caching. I want to run a simple regex pattern on a big dataset, but I am running into the issue of compiled expression not being cached. \r\n\r\nInstead of opening a new issue, I thought I would put my query here. Let me know if a new issue would be more suitable. Thanks'
'Hi @MaveriQ! This fix is also included in the `datasets` library. Can you provide a reproducer?'] | 2020-08-12 20:20:07+00:00 | 2022-08-08 11:02:23+00:00 | 2020-08-24 16:34:35+00:00 | NONE | null | The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it.
```python
import nlp
import transformers
def main():
ds = nlp.load_dataset("reddit", split="train[:500]")
tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")
def convert_to_features(example_batch):
input_str = example_batch["body"]
encodings = tokenizer(input_str, add_special_tokens=True, truncation=True)
return encodings
ds = ds.map(convert_to_features, batched=True)
if __name__ == "__main__":
main()
```
Roughly 3/10 times, this example recomputes the tokenization.
Is this expected behaviour? | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/501/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/501/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/500/comments | https://api.github.com/repos/huggingface/datasets/issues/500/events | https://github.com/huggingface/datasets/pull/500 | 677,841,708 | MDExOlB1bGxSZXF1ZXN0NDY2ODk0NTk0 | 500 | Use hnsw in wiki_dpr | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-12 16:58:07+00:00 | 2020-08-20 07:59:19+00:00 | 2020-08-20 07:59:18+00:00 | MEMBER | null | The HNSW faiss index is much faster that regular Flat index. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/500/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/500/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/500.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/500', 'merged_at': '2020-08-20T07:59:18Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/500.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/500'} | true |
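As a rough usage sketch of the speed-up (the dimension, the `HNSW128` parameter and the inner-product metric are illustrative assumptions, not necessarily the exact `wiki_dpr` configuration):
```python
import numpy as np
import faiss

d = 768  # DPR passage embeddings are 768-dimensional
xb = np.random.rand(10_000, d).astype("float32")

flat = faiss.IndexFlatIP(d)  # exact search, linear scan over all vectors
hnsw = faiss.index_factory(d, "HNSW128,Flat", faiss.METRIC_INNER_PRODUCT)  # approximate, much faster queries

flat.add(xb)
hnsw.add(xb)

query = np.random.rand(1, d).astype("float32")
scores, ids = hnsw.search(query, 5)  # top-5 approximate nearest neighbours
```
In recent versions of the library the same kind of index can be requested through `Dataset.add_faiss_index(..., string_factory="HNSW128,Flat", metric_type=faiss.METRIC_INNER_PRODUCT)`.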
https://api.github.com/repos/huggingface/datasets/issues/499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/499/comments | https://api.github.com/repos/huggingface/datasets/issues/499/events | https://github.com/huggingface/datasets/pull/499 | 677,709,938 | MDExOlB1bGxSZXF1ZXN0NDY2Nzg1MjAy | 499 | Narrativeqa (with full text) | {'avatar_url': 'https://avatars.githubusercontent.com/u/13795113?v=4', 'events_url': 'https://api.github.com/users/ghomasHudson/events{/privacy}', 'followers_url': 'https://api.github.com/users/ghomasHudson/followers', 'following_url': 'https://api.github.com/users/ghomasHudson/following{/other_user}', 'gists_url': 'https://api.github.com/users/ghomasHudson/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/ghomasHudson', 'id': 13795113, 'login': 'ghomasHudson', 'node_id': 'MDQ6VXNlcjEzNzk1MTEz', 'organizations_url': 'https://api.github.com/users/ghomasHudson/orgs', 'received_events_url': 'https://api.github.com/users/ghomasHudson/received_events', 'repos_url': 'https://api.github.com/users/ghomasHudson/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ghomasHudson/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/ghomasHudson'} | [] | closed | false | null | [] | null | ["I took a look at the dummy data creation for this dataset.\r\n\r\nMaybe it didn't work on your side might be because `master.zip` and `narrativeqa_full_text.zip` are supposed to be directories and not acutal zip files in the dummy data folder.\r\n\r\nI managed to make it work with this `dummy_data.zip` file:\r\nhttps://drive.google.com/file/d/1G9ZHAjelazNApbFI0ep2dnSAWklXgGMd/view?usp=sharing"
"@lhoestq Hmmm wasn't that. Must have been something else I missed.\r\n\r\nHave committed your working version though now."
'Ok thanks.\r\nCould you rebase from master to fix the CI please ?'
'Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?'
"> Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?\r\n\r\nHave added the test set code but getting an OverflowError when trying to regen the dataset_infos.json:\r\n\r\n---\r\nOverflowError: There was an overflow in the <class 'pyarrow.lib.StructArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB\r\n\r\n---\r\n"
"Thanks for reporting @ghomasHudson , I'll look into it"
"It looks like it's an issue with Pyarrow.\r\nBy changing the `DEFAULT_MAX_BATCH_SIZE` to 1000 instead of 10 000 in `arrow_writer.py` I was able to run the command.\r\n\r\nBasically it seems that is an Arrow StructArray has more than 1-2GB of data, then it shuffles some of its content.\r\nI can't find any issue on Apache Arrow's JIRA about this problem. It will require more investigation.\r\n\r\nMaybe we can simply automatically decrease the writer's batch size when this happens. We can just check if the arrow array is more than a certain amount of bytes. "
"@lhoestq I've finally got round to regenerating the `dataset_infos.json` for this and adding all 3 splits. I've done this and updated for the new version of datasets.\r\n\r\nThe CI tests still aren't passing though (they pass on my machine). `test_load_dataset_narrativeqa` seems to fail but I have no idea how. Would appreciate if you have any ideas - would be great to finally finish this one!"
"The dummy data test fails, apparently it's because no examples are yielded for the dummy data.\r\n\r\nAlso it looks like the PR now show changes in many other files than the ones for NarrativeQA, could you create another branch and another PR please ?\r\n\r\nFeel free to ping me on the new PR so we can fi the dummy data together"] | 2020-08-12 13:49:43+00:00 | 2020-12-09 11:21:02+00:00 | 2020-12-09 11:21:02+00:00 | CONTRIBUTOR | null | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/499/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/499/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/499.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/499', 'merged_at': None, 'patch_url': 'https://github.com/huggingface/datasets/pull/499.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/499'} | true |
https://api.github.com/repos/huggingface/datasets/issues/498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/498/comments | https://api.github.com/repos/huggingface/datasets/issues/498/events | https://github.com/huggingface/datasets/pull/498 | 677,597,479 | MDExOlB1bGxSZXF1ZXN0NDY2Njg5NTcy | 498 | dont use beam fs to save info for local cache dir | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-12 11:00:00+00:00 | 2020-08-14 13:17:21+00:00 | 2020-08-14 13:17:20+00:00 | MEMBER | null | If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info
Fix #490
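Not the actual builder code, just a hypothetical illustration of the "is this cache dir local?" check implied above (the helper name and scheme list are assumptions):
```python
from urllib.parse import urlparse


def is_local_path(path: str) -> bool:
    # Anything without a remote scheme (gs://, s3://, hdfs://, ...) is treated as a local
    # filesystem path, so Beam's filesystem is not needed to write its dataset info.
    return urlparse(path).scheme in ("", "file")


assert is_local_path("/home/user/.cache/huggingface/datasets")
assert not is_local_path("gs://my-bucket/wikipedia")
```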
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 1, 'laugh': 0, 'rocket': 0, 'total_count': 1, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/498/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/498/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/498.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/498', 'merged_at': '2020-08-14T13:17:20Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/498.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/498'} | true |
https://api.github.com/repos/huggingface/datasets/issues/497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/497/comments | https://api.github.com/repos/huggingface/datasets/issues/497/events | https://github.com/huggingface/datasets/pull/497 | 677,057,116 | MDExOlB1bGxSZXF1ZXN0NDY2MjQ2NDQ3 | 497 | skip header in PAWS-X | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-11 17:26:25+00:00 | 2020-08-19 09:50:02+00:00 | 2020-08-19 09:50:01+00:00 | MEMBER | null | This should fix #485
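Concretely, the change amounts to skipping the first (header) line of each PAWS-X TSV file before yielding examples. A minimal sketch, with assumed column positions and field names rather than the actual generation script:
```python
import csv


def _generate_examples(filepath):
    # Hypothetical generator: drop the TSV header row, which previously leaked in as a fake example.
    with open(filepath, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header line (id / sentence1 / sentence2 / label)
        for idx, row in enumerate(reader):
            _, sentence1, sentence2, label = row
            yield idx, {"sentence1": sentence1, "sentence2": sentence2, "label": label}
```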
I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one).
Note that there are new fields in `dataset_infos.json` introduced in the latest release 0.4.0 corresponding to post processing info. I removed them in this case when I ran `nlp-cli ./datasets/xtreme --save_infos` to keep backward compatibility (versions 0.3.0 can't load these fields).
I think I'll change the logic so that `nlp-cli test` doesn't create these fields for dataset with no post processing | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 1, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 1, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/497/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/497/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/497.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/497', 'merged_at': '2020-08-19T09:50:01Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/497.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/497'} | true |
https://api.github.com/repos/huggingface/datasets/issues/496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/496/comments | https://api.github.com/repos/huggingface/datasets/issues/496/events | https://github.com/huggingface/datasets/pull/496 | 677,016,998 | MDExOlB1bGxSZXF1ZXN0NDY2MjE1Mjg1 | 496 | fix bad type in overflow check | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-11 16:24:58+00:00 | 2020-08-14 13:29:35+00:00 | 2020-08-14 13:29:34+00:00 | MEMBER | null | When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field.
This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example).
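To see why inferring the type from the first example alone is fragile, here is a small standalone pyarrow illustration (not the library's internal code):
```python
import pyarrow as pa

print(pa.array([None]).type)         # null   -> what gets inferred from a first example with a missing field
print(pa.array(["some text"]).type)  # string -> what a later, non-null batch actually contains

# A null-typed chunk and a string-typed chunk don't line up, which is the kind of
# mismatch the overflow check tripped over before it used the inferred features.
```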
This should fix #482 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/496/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/496/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/496.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/496', 'merged_at': '2020-08-14T13:29:34Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/496.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/496'} | true |
https://api.github.com/repos/huggingface/datasets/issues/495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/495/comments | https://api.github.com/repos/huggingface/datasets/issues/495/events | https://github.com/huggingface/datasets/pull/495 | 676,959,289 | MDExOlB1bGxSZXF1ZXN0NDY2MTY5MTA3 | 495 | stack vectors in pytorch and tensorflow | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | [] | 2020-08-11 15:12:53+00:00 | 2020-08-12 09:30:49+00:00 | 2020-08-12 09:30:48+00:00 | MEMBER | null | When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`.
I added support for stacked tensors for both pytorch and tensorflow.
For ragged tensors, they are stacked only for tensorflow as pytorch doesn't support ragged tensors.
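In terms of usage, the expected behaviour after this change looks roughly like this (the toy dataset and column name are made up, torch/tensorflow need to be installed, and `from_dict`/`set_format` are used as in recent releases):
```python
import nlp  # `datasets` in later versions

ds = nlp.Dataset.from_dict({"vec": [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]})

ds.set_format(type="torch", columns=["vec"])
batch = ds[0:2]["vec"]  # a single stacked torch.Tensor of shape (2, 2), not a list of tensors

ds.set_format(type="tensorflow", columns=["vec"])
column = ds["vec"]      # a stacked tf.Tensor (or a tf.RaggedTensor when rows have different lengths)
```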
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/495/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/495/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/495.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/495', 'merged_at': '2020-08-12T09:30:48Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/495.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/495'} | true |
https://api.github.com/repos/huggingface/datasets/issues/494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/494/comments | https://api.github.com/repos/huggingface/datasets/issues/494/events | https://github.com/huggingface/datasets/pull/494 | 676,886,955 | MDExOlB1bGxSZXF1ZXN0NDY2MTExOTQz | 494 | Fix numpy stacking | {'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/lhoestq', 'id': 42851186, 'login': 'lhoestq', 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/lhoestq'} | [] | closed | false | null | [] | null | ['This PR also fixed a bug where numpy arrays were returned instead of pytorch tensors when getting with a clumn as a key.'] | 2020-08-11 13:40:30+00:00 | 2020-08-11 14:56:50+00:00 | 2020-08-11 13:49:52+00:00 | MEMBER | null | When getting items using a column name as a key, numpy arrays were not stacked.
I fixed that and added some tests.
There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/494/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/494/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/494.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/494', 'merged_at': '2020-08-11T13:49:52Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/494.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/494'} | true |
https://api.github.com/repos/huggingface/datasets/issues/493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/493/comments | https://api.github.com/repos/huggingface/datasets/issues/493/events | https://github.com/huggingface/datasets/pull/493 | 676,527,351 | MDExOlB1bGxSZXF1ZXN0NDY1ODIxOTA0 | 493 | Fix wmt zh-en url | {'avatar_url': 'https://avatars.githubusercontent.com/u/6045025?v=4', 'events_url': 'https://api.github.com/users/sshleifer/events{/privacy}', 'followers_url': 'https://api.github.com/users/sshleifer/followers', 'following_url': 'https://api.github.com/users/sshleifer/following{/other_user}', 'gists_url': 'https://api.github.com/users/sshleifer/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/sshleifer', 'id': 6045025, 'login': 'sshleifer', 'node_id': 'MDQ6VXNlcjYwNDUwMjU=', 'organizations_url': 'https://api.github.com/users/sshleifer/orgs', 'received_events_url': 'https://api.github.com/users/sshleifer/received_events', 'repos_url': 'https://api.github.com/users/sshleifer/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/sshleifer/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sshleifer/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/sshleifer'} | [] | closed | false | null | [] | null | ["this doesn't work. I can decompress the file after download locally."] | 2020-08-11 02:14:52+00:00 | 2020-08-11 02:22:28+00:00 | 2020-08-11 02:22:12+00:00 | CONTRIBUTOR | null | I verified that
```
wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00
```
runs in 2 minutes. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/493/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/493/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/493.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/493', 'merged_at': None, 'patch_url': 'https://github.com/huggingface/datasets/pull/493.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/493'} | true |
https://api.github.com/repos/huggingface/datasets/issues/492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/492/comments | https://api.github.com/repos/huggingface/datasets/issues/492/events | https://github.com/huggingface/datasets/issues/492 | 676,495,064 | MDU6SXNzdWU2NzY0OTUwNjQ= | 492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | {'avatar_url': 'https://avatars.githubusercontent.com/u/4564897?v=4', 'events_url': 'https://api.github.com/users/jarednielsen/events{/privacy}', 'followers_url': 'https://api.github.com/users/jarednielsen/followers', 'following_url': 'https://api.github.com/users/jarednielsen/following{/other_user}', 'gists_url': 'https://api.github.com/users/jarednielsen/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jarednielsen', 'id': 4564897, 'login': 'jarednielsen', 'node_id': 'MDQ6VXNlcjQ1NjQ4OTc=', 'organizations_url': 'https://api.github.com/users/jarednielsen/orgs', 'received_events_url': 'https://api.github.com/users/jarednielsen/received_events', 'repos_url': 'https://api.github.com/users/jarednielsen/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jarednielsen/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jarednielsen/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jarednielsen'} | [] | closed | false | null | [] | null | ['In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.'
'Or maybe the assertion comes from elsewhere ?'
"I'm using the master branch. The assertion failure comes from the underlying `pa.concat_tables()`, which is in the pyarrow package. That method does check schemas.\r\n\r\nSince `features.type` does not contain information about nullable vs non-nullable features, the `cast_()` method won't resolve the schema mismatch. There is information in a schema which is not stored in features."
"I'm doing a refactor of type inference in #363 . Both text fields should match after that"
'By default nullable will be set to True'
'It should be good now. I was able to run\r\n\r\n```python\r\n>>> from nlp import concatenate_datasets, load_dataset\r\n>>>\r\n>>> bookcorpus = load_dataset("bookcorpus", split="train")\r\n>>> wiki = load_dataset("wikipedia", "20200501.en", split="train")\r\n>>> wiki.remove_columns_("title") # only keep the text\r\n>>>\r\n>>> assert bookcorpus.features.type == wiki.features.type\r\n>>> bert_dataset = concatenate_datasets([bookcorpus, wiki])\r\n```'
'Thanks!'] | 2020-08-11 00:27:46+00:00 | 2020-08-26 16:17:19+00:00 | 2020-08-26 16:17:19+00:00 | CONTRIBUTOR | null | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dset = nlp.concatenate_datasets([dset_wikipedia, dset_books])
```
This fails because they have different schemas, despite having identical features.
```python
assert dset_wikipedia.features == dset_books.features # True
assert dset_wikipedia._data.schema == dset_books._data.schema # False
```
The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves.
```python
dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema)
```
| {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/492/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/492/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/491/comments | https://api.github.com/repos/huggingface/datasets/issues/491/events | https://github.com/huggingface/datasets/issues/491 | 676,486,275 | MDU6SXNzdWU2NzY0ODYyNzU= | 491 | No 0.4.0 release on GitHub | {'avatar_url': 'https://avatars.githubusercontent.com/u/4564897?v=4', 'events_url': 'https://api.github.com/users/jarednielsen/events{/privacy}', 'followers_url': 'https://api.github.com/users/jarednielsen/followers', 'following_url': 'https://api.github.com/users/jarednielsen/following{/other_user}', 'gists_url': 'https://api.github.com/users/jarednielsen/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jarednielsen', 'id': 4564897, 'login': 'jarednielsen', 'node_id': 'MDQ6VXNlcjQ1NjQ4OTc=', 'organizations_url': 'https://api.github.com/users/jarednielsen/orgs', 'received_events_url': 'https://api.github.com/users/jarednielsen/received_events', 'repos_url': 'https://api.github.com/users/jarednielsen/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jarednielsen/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jarednielsen/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jarednielsen'} | [] | closed | false | null | [] | null | ['I did the release on github, and updated the doc :)\r\nSorry for the delay'
'Thanks!'] | 2020-08-10 23:59:57+00:00 | 2020-08-11 16:50:07+00:00 | 2020-08-11 16:50:07+00:00 | CONTRIBUTOR | null | 0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo. | {'+1': 1, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 1, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/491/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/491/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/490/comments | https://api.github.com/repos/huggingface/datasets/issues/490/events | https://github.com/huggingface/datasets/issues/490 | 676,482,242 | MDU6SXNzdWU2NzY0ODIyNDI= | 490 | Loading preprocessed Wikipedia dataset requires apache_beam | {'avatar_url': 'https://avatars.githubusercontent.com/u/4564897?v=4', 'events_url': 'https://api.github.com/users/jarednielsen/events{/privacy}', 'followers_url': 'https://api.github.com/users/jarednielsen/followers', 'following_url': 'https://api.github.com/users/jarednielsen/following{/other_user}', 'gists_url': 'https://api.github.com/users/jarednielsen/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jarednielsen', 'id': 4564897, 'login': 'jarednielsen', 'node_id': 'MDQ6VXNlcjQ1NjQ4OTc=', 'organizations_url': 'https://api.github.com/users/jarednielsen/orgs', 'received_events_url': 'https://api.github.com/users/jarednielsen/received_events', 'repos_url': 'https://api.github.com/users/jarednielsen/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jarednielsen/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jarednielsen/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jarednielsen'} | [] | closed | false | null | [] | null | [] | 2020-08-10 23:46:50+00:00 | 2020-08-14 13:17:20+00:00 | 2020-08-14 13:17:20+00:00 | CONTRIBUTOR | null | Running
`nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")`
gives an error if apache_beam is not installed, stemming from
https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988
This succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. Could it be removed? | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/490/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/490/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/489/comments | https://api.github.com/repos/huggingface/datasets/issues/489/events | https://github.com/huggingface/datasets/issues/489 | 676,456,257 | MDU6SXNzdWU2NzY0NTYyNTc= | 489 | ug | {'avatar_url': 'https://avatars.githubusercontent.com/u/2000204?v=4', 'events_url': 'https://api.github.com/users/timothyjlaurent/events{/privacy}', 'followers_url': 'https://api.github.com/users/timothyjlaurent/followers', 'following_url': 'https://api.github.com/users/timothyjlaurent/following{/other_user}', 'gists_url': 'https://api.github.com/users/timothyjlaurent/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/timothyjlaurent', 'id': 2000204, 'login': 'timothyjlaurent', 'node_id': 'MDQ6VXNlcjIwMDAyMDQ=', 'organizations_url': 'https://api.github.com/users/timothyjlaurent/orgs', 'received_events_url': 'https://api.github.com/users/timothyjlaurent/received_events', 'repos_url': 'https://api.github.com/users/timothyjlaurent/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/timothyjlaurent/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/timothyjlaurent'} | [] | closed | false | null | [] | null | ['whoops' 'please delete this'] | 2020-08-10 22:33:03+00:00 | 2020-08-10 22:55:14+00:00 | 2020-08-10 22:33:40+00:00 | NONE | null | null | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/489/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/489/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/488/comments | https://api.github.com/repos/huggingface/datasets/issues/488/events | https://github.com/huggingface/datasets/issues/488 | 676,299,993 | MDU6SXNzdWU2NzYyOTk5OTM= | 488 | issues with downloading datasets for wmt16 and wmt19 | {'avatar_url': 'https://avatars.githubusercontent.com/u/10676103?v=4', 'events_url': 'https://api.github.com/users/stas00/events{/privacy}', 'followers_url': 'https://api.github.com/users/stas00/followers', 'following_url': 'https://api.github.com/users/stas00/following{/other_user}', 'gists_url': 'https://api.github.com/users/stas00/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/stas00', 'id': 10676103, 'login': 'stas00', 'node_id': 'MDQ6VXNlcjEwNjc2MTAz', 'organizations_url': 'https://api.github.com/users/stas00/orgs', 'received_events_url': 'https://api.github.com/users/stas00/received_events', 'repos_url': 'https://api.github.com/users/stas00/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/stas00/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stas00/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/stas00'} | [] | closed | false | null | [] | null | ['I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.02\r\ncat UNv1.0.en-ru.tar.gz.0* > UNv1.0.en-ru.tar.gz\r\n```\r\nit has other languages as well, in case https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/ is gone'
"Further, `nlp.load_dataset('wmt19', 'ru-en')` has only the `train` and `val` datasets. `test` is missing.\r\n\r\nFixed locally for summarization needs, by running:\r\n```\r\npip install sacrebleu\r\nsacrebleu -t wmt19 -l ru-en --echo src > test.source\r\nsacrebleu -t wmt19 -l ru-en --echo ref > test.target\r\n```\r\nh/t @sshleifer "
'Fixed in https://github.com/huggingface/datasets/pull/1912'] | 2020-08-10 17:32:51+00:00 | 2022-10-04 17:46:59+00:00 | 2022-10-04 17:46:58+00:00 | CONTRIBUTOR | null | I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and now it worked. So it must have been some outdated dependencies that `pip install -e ".[dev]" ` fixed.
2. it was downloading at 60kbs - almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for.
I tried the same code with `wmt19` in parallel and it took a few secs to download and it only fetched data for the requested pair. (but it failed too, see below)
3. my machine crashed and when I retried I got:
```
Traceback (most recent call last):
File "./download.py", line 9, in <module>
dataset = nlp.load_dataset('wmt16', 'ru-en')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete'
```
it can't handle resumes, but it doesn't allow a fresh start either. Had to delete it manually.
4. and finally when it downloaded the dataset, it then failed to fetch the metrics:
```
Traceback (most recent call last):
File "./download.py", line 15, in <module>
metric = nlp.load_metric('wmt16')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric
module_path, hash = prepare_module(path, download_config=download_config, dataset=False)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py
```
5. If I run the same code with `wmt19`, it fails too:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/488/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/488/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/487/comments | https://api.github.com/repos/huggingface/datasets/issues/487/events | https://github.com/huggingface/datasets/pull/487 | 676,143,029 | MDExOlB1bGxSZXF1ZXN0NDY1NTA1NjQy | 487 | Fix elasticsearch result ids returning as strings | {'avatar_url': 'https://avatars.githubusercontent.com/u/3595526?v=4', 'events_url': 'https://api.github.com/users/sai-prasanna/events{/privacy}', 'followers_url': 'https://api.github.com/users/sai-prasanna/followers', 'following_url': 'https://api.github.com/users/sai-prasanna/following{/other_user}', 'gists_url': 'https://api.github.com/users/sai-prasanna/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/sai-prasanna', 'id': 3595526, 'login': 'sai-prasanna', 'node_id': 'MDQ6VXNlcjM1OTU1MjY=', 'organizations_url': 'https://api.github.com/users/sai-prasanna/orgs', 'received_events_url': 'https://api.github.com/users/sai-prasanna/received_events', 'repos_url': 'https://api.github.com/users/sai-prasanna/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sai-prasanna/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/sai-prasanna'} | [] | closed | false | null | [] | null | ['It looks like you need to rebase from master to fix the CI. Could you do that please ?'] | 2020-08-10 13:37:11+00:00 | 2020-08-31 10:42:46+00:00 | 2020-08-31 10:42:46+00:00 | CONTRIBUTOR | null | I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" returned for searches are strings, but our library assumes them to be integers. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/487/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/487/timeline | null | null | 0 | {'diff_url': 'https://github.com/huggingface/datasets/pull/487.diff', 'html_url': 'https://github.com/huggingface/datasets/pull/487', 'merged_at': '2020-08-31T10:42:46Z', 'patch_url': 'https://github.com/huggingface/datasets/pull/487.patch', 'url': 'https://api.github.com/repos/huggingface/datasets/pulls/487'} | true |
https://api.github.com/repos/huggingface/datasets/issues/486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/486/comments | https://api.github.com/repos/huggingface/datasets/issues/486/events | https://github.com/huggingface/datasets/issues/486 | 675,649,034 | MDU6SXNzdWU2NzU2NDkwMzQ= | 486 | Bookcorpus data contains pretokenized text | {'avatar_url': 'https://avatars.githubusercontent.com/u/99543?v=4', 'events_url': 'https://api.github.com/users/orsharir/events{/privacy}', 'followers_url': 'https://api.github.com/users/orsharir/followers', 'following_url': 'https://api.github.com/users/orsharir/following{/other_user}', 'gists_url': 'https://api.github.com/users/orsharir/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/orsharir', 'id': 99543, 'login': 'orsharir', 'node_id': 'MDQ6VXNlcjk5NTQz', 'organizations_url': 'https://api.github.com/users/orsharir/orgs', 'received_events_url': 'https://api.github.com/users/orsharir/received_events', 'repos_url': 'https://api.github.com/users/orsharir/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/orsharir/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/orsharir/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/orsharir'} | [] | closed | false | null | [] | null | ["Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do it. Could you provide more details ?"
'I\'m afraid that I don\'t know how to obtain the original BookCorpus data. I believe this version came from an anonymous Google Drive link posted in another issue.\r\n\r\nGoing through the raw text in this version, it\'s apparent that NLTK\'s TreebankWordTokenizer was applied on it (I gave some examples in my original post), followed by:\r\n`\' \'.join(tokens)`\r\nYou can retrieve the tokenization by splitting on whitespace. You can then "detokenize" it with TreebankWordDetokenizer class of NLTK (though, as I suggested, use the fixed version in my repo). This will bring the text closer to its original form, but some steps of TreebankWordTokenizer are destructive, so it wouldn\'t be one-to-one. Something along the lines of the following should work:\r\n```\r\ntreebank_detokenizer = nltk.tokenize.treebank.TreebankWordDetokenizer()\r\ndb = nlp.load_dataset(\'bookcorpus\', split=nlp.Split.TRAIN)\r\ndb = db.map(lambda x: treebank_detokenizer.detokenize(x[\'text\'].split()))\r\n```\r\n\r\nRegarding other issues beyond the above, I\'m afraid that I can\'t help with that.'
"Ok I get it, that would be very cool indeed\r\n\r\nWhat kinds of patterns the detokenizer can't retrieve ?"
'The TreebankTokenizer makes some assumptions about whitespace, parentheses, quotation marks, etc. For instance, while tokenizing the following text:\r\n```\r\nDwayne "The Rock" Johnson\r\n```\r\nwill result in:\r\n```\r\nDwayne `` The Rock \'\' Johnson\r\n```\r\nwhere the left and right quotation marks are turned into distinct symbols. Upon reconstruction, we can attach the left part to its token on the right, and respectively for the right part. However, the following texts would be tokenized exactly the same:\r\n```\r\nDwayne " The Rock " Johnson\r\nDwayne " The Rock" Johnson\r\nDwayne " The Rock" Johnson\r\n...\r\n```\r\nIn the above examples, the detokenizer would correct these inputs into the canonical text\r\n```\r\nDwayne "The Rock" Johnson\r\n```\r\nHowever, there are cases where there the solution cannot easily be inferred (at least without a true LM - this tokenizer is just a bunch of regexes). For instance, in cases where you have a fragment that contains the end of quote, but not its beginning, plus an accidental space:\r\n```\r\n... and it sounds fantastic, " he said.\r\n```\r\nIn the above case, the tokenizer would assume that the quotes refer to the next token, and so upon detokenization it will result in the following mistake:\r\n```\r\n... and it sounds fantastic, "he said.\r\n```\r\n\r\nWhile these are all odd edge cases (the basic assumptions do make sense), in noisy data they can occur, which is why I mentioned that the detokenizer cannot restore the original perfectly.\r\n'
'To confirm, since this is preprocessed, this was not the exact version of the Book Corpus used to actually train the models described here (particularly Distilbert)? https://huggingface.co/datasets/bookcorpus\r\n\r\nOr does this preprocessing exactly match that of the papers?'
'I believe these are just artifacts of this particular source. It might be better to crawl it again, or use another preprocessed source, as found here: https://github.com/soskek/bookcorpus '
'Yes actually the BookCorpus on hugginface is based on [this](https://github.com/soskek/bookcorpus/issues/24#issuecomment-643933352). And I kind of regret naming it as "BookCorpus" instead of something like "BookCorpusLike".\r\n\r\nBut there is a good news ! @shawwn has replicated BookCorpus in his way, and also provided a link to download the plain text files. see [here](https://github.com/soskek/bookcorpus/issues/27). There is chance we can have a "OpenBookCorpus" !'
'Resolved via #856'] | 2020-08-09 06:53:24+00:00 | 2022-10-04 17:44:33+00:00 | 2022-10-04 17:44:33+00:00 | CONTRIBUTOR | null | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways that are incompatible with how, for instance, BERT's WordPiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively.
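To make the effect concrete, a minimal snippet along these lines reproduces the artifacts described above (assuming a reasonably recent NLTK; exact token boundaries may vary slightly between versions):
```
from nltk.tokenize.treebank import TreebankWordTokenizer, TreebankWordDetokenizer

text = 'He said, "I didn\'t do it."'
tokens = TreebankWordTokenizer().tokenize(text)
print(tokens)
# roughly: ['He', 'said', ',', '``', 'I', 'did', "n't", 'do', 'it', '.', "''"]

# Joining on whitespace gives text that looks like the cached bookcorpus copy:
print(" ".join(tokens))

# The detokenizer reverses most, but not all, of these changes:
print(TreebankWordDetokenizer().detokenize(tokens))
```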
On my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as well as possible). I think it would be beneficial to apply this transformation directly to your remote cached copy of the dataset. If you choose to do so, I would also suggest using my fork of NLTK, which fixes several bugs in their detokenizer (I've opened a pull request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575 | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/486/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/486/timeline | null | completed | null | null | false
https://api.github.com/repos/huggingface/datasets/issues/485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/485/comments | https://api.github.com/repos/huggingface/datasets/issues/485/events | https://github.com/huggingface/datasets/issues/485 | 675,595,393 | MDU6SXNzdWU2NzU1OTUzOTM= | 485 | PAWS dataset first item is header | {'avatar_url': 'https://avatars.githubusercontent.com/u/13238952?v=4', 'events_url': 'https://api.github.com/users/jxmorris12/events{/privacy}', 'followers_url': 'https://api.github.com/users/jxmorris12/followers', 'following_url': 'https://api.github.com/users/jxmorris12/following{/other_user}', 'gists_url': 'https://api.github.com/users/jxmorris12/gists{/gist_id}', 'gravatar_id': '', 'html_url': 'https://github.com/jxmorris12', 'id': 13238952, 'login': 'jxmorris12', 'node_id': 'MDQ6VXNlcjEzMjM4OTUy', 'organizations_url': 'https://api.github.com/users/jxmorris12/orgs', 'received_events_url': 'https://api.github.com/users/jxmorris12/received_events', 'repos_url': 'https://api.github.com/users/jxmorris12/repos', 'site_admin': False, 'starred_url': 'https://api.github.com/users/jxmorris12/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/jxmorris12/subscriptions', 'type': 'User', 'url': 'https://api.github.com/users/jxmorris12'} | [] | closed | false | null | [] | null | [] | 2020-08-08 22:05:25+00:00 | 2020-08-19 09:50:01+00:00 | 2020-08-19 09:50:01+00:00 | CONTRIBUTOR | null | ```
import nlp
dataset = nlp.load_dataset('xtreme', 'PAWS-X.en')
dataset['test'][0]
```
prints the following:
```
{'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'}
```
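The header row of the source TSV is apparently being yielded as a regular example. A minimal sketch of the kind of header handling the loading script could use is shown below (the function name and field names are hypothetical, not taken from the actual `xtreme` script):
```
import csv

def _generate_examples(filepath):
    # DictReader consumes the first line as the header, so it never becomes an example.
    with open(filepath, encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        for idx, row in enumerate(reader):
            yield idx, {
                "sentence1": row["sentence1"],
                "sentence2": row["sentence2"],
                "label": row["label"],
            }
```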
dataset['test'][0] should be the first real example in the dataset, not a dictionary mapping each column name to itself. The loading script probably just needs to skip the header row of the source file by default, along the lines of the sketch above. | {'+1': 0, '-1': 0, 'confused': 0, 'eyes': 0, 'heart': 0, 'hooray': 0, 'laugh': 0, 'rocket': 0, 'total_count': 0, 'url': 'https://api.github.com/repos/huggingface/datasets/issues/485/reactions'} | https://api.github.com/repos/huggingface/datasets/issues/485/timeline | null | completed | null | null | false