url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3095/comments | https://api.github.com/repos/huggingface/datasets/issues/3095/events | https://github.com/huggingface/datasets/issues/3095 | 1,027,453,146 | I_kwDODunzps49PbDa | 3,095 | `cast_column` makes audio decoding fail | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"cc @anton-l @albertvillanova ",
"Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_dataset(\"arabic_speech_corpus\", split=\"train\")\r\nds = ds.cast_column(\"audio\", datasets.features.Audio(sampling_rate=16_000))\r\nprint(ds[0][\"audio\"])\r\n```\r\n\r\nI'm fixing it."
] | 1,634,305,018,000 | 1,634,312,310,000 | 1,634,312,310,000 | MEMBER | null | null | null | ## Describe the bug
After changing the sampling rate, automatic decoding fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import datasets
ds = load_dataset("common_voice", "ab", split="train")
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
print(ds[0]["audio"]) # <- this fails currently
```
yields:
```
TypeError: forward() takes 2 positional arguments but 4 were given
```
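A possible workaround until the fix lands (an untested sketch, assuming `librosa` is installed): skip the cast and resample the decoded array manually.
```python
import librosa
from datasets import load_dataset

ds = load_dataset("common_voice", "ab", split="train")
audio = ds[0]["audio"]  # decoded at the file's native sampling rate
# Resample to 16 kHz outside of `cast_column`:
resampled = librosa.resample(audio["array"], orig_sr=audio["sampling_rate"], target_sr=16_000)
```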
## Expected results
no failure
## Actual results
The `TypeError` traceback shown under the reproduction steps above.
## Environment info
- `datasets` version: 1.13.2 (master)
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3095/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3094/comments | https://api.github.com/repos/huggingface/datasets/issues/3094/events | https://github.com/huggingface/datasets/issues/3094 | 1,027,328,633 | I_kwDODunzps49O8p5 | 3,094 | Support loading a dataset from SQLite files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,634,295,521,000 | 1,634,295,521,000 | null | MEMBER | null | null | null | As requested by @julien-c, we could eventually support loading a dataset from SQLite files, as is the case for JSON/CSV files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3094/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3093/comments | https://api.github.com/repos/huggingface/datasets/issues/3093/events | https://github.com/huggingface/datasets/issues/3093 | 1,027,262,124 | I_kwDODunzps49Osas | 3,093 | Error loading json dataset with multiple splits if keys in nested dicts have a different order | {
"login": "dthulke",
"id": 8331189,
"node_id": "MDQ6VXNlcjgzMzExODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8331189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dthulke",
"html_url": "https://github.com/dthulke",
"followers_url": "https://api.github.com/users/dthulke/followers",
"following_url": "https://api.github.com/users/dthulke/following{/other_user}",
"gists_url": "https://api.github.com/users/dthulke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dthulke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dthulke/subscriptions",
"organizations_url": "https://api.github.com/users/dthulke/orgs",
"repos_url": "https://api.github.com/users/dthulke/repos",
"events_url": "https://api.github.com/users/dthulke/events{/privacy}",
"received_events_url": "https://api.github.com/users/dthulke/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi, \r\n\r\neven Pandas, which is less strict compared to PyArrow when it comes to reading JSON, doesn't support different orderings:\r\n```python\r\nimport io\r\nimport pandas as pd\r\n\r\ns = \"\"\"\r\n{\"a\": {\"c\": 8, \"b\": 5}}\r\n{\"a\": {\"b\": 7, \"c\": 6}}\r\n\"\"\"\r\n\r\nbuffer = io.StringIO(s)\r\ndf = pd.read_json(buffer, lines=True)\r\n\r\nprint(df.shape[0]) # 0\r\n```\r\n\r\nSo we can't even fall back to Pandas in such cases.\r\n\r\nIt seems the only option is a script that recursively re-orders fields to enforce deterministic order:\r\n```python\r\nwith open(\"train.json\", \"r\") as fin:\r\n with open(\"train_reordered.json\", \"w\") as fout:\r\n for line in fin:\r\n obj_jsonl = json.loads(line.strip())\r\n fout.write(json.dumps(obj_jsonl, sort_keys=True) + \"\\n\")\r\n```"
] | 1,634,290,405,000 | 1,636,550,434,000 | null | NONE | null | null | null | ## Describe the bug
Loading a JSON dataset with multiple splits whose nested dicts have keys in different orders results in the error below.
If the keys in the nested dicts always have the same order, or if you load only a single split (even one whose nested dicts differ in key order), everything works fine.
## Steps to reproduce the bug
Create two json files:
train.json
```
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
```
test.json
```
{"a": {"b": 1, "c": 2}}
{"a": {"b": 3, "c": 4}}
```
```python
from datasets import load_dataset
# Loading the files individually works (even though the keys in train.json don't have the same order)
load_dataset('json', data_files={"test": "test.json"})
load_dataset('json', data_files={"train": "train.json"})
# Loading both splits fails
load_dataset('json', data_files={"train": "train.json", "test": "test.json"})
```
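A possible workaround (a sketch adapted from the script in the comment thread): re-serialize each JSON-lines file with sorted keys, so all splits share one deterministic field order, before loading.
```python
import json

# Rewrite each file with sort_keys=True so nested dicts always have the
# same key order (adapted from the comments above).
for name in ("train", "test"):
    with open(f"{name}.json") as fin, open(f"{name}_reordered.json", "w") as fout:
        for line in fin:
            fout.write(json.dumps(json.loads(line), sort_keys=True) + "\n")
```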
## Expected results
Loading both splits should not give an error, whether the nested dicts have the same key order or not.
## Actual results
```
>>> load_dataset('json', data_files={"train": "train.json", "test": "test.json"})
Using custom data configuration default-f1bc76fd07398c4c
Downloading and preparing dataset json/default to /home/dthulke/.cache/huggingface/datasets/json/default-f1bc76fd07398c4c/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|██████████| 2/2 [00:00<00:00, 8839.42it/s]
100%|██████████| 2/2 [00:00<00:00, 477.82it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1596, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 592, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 329, in pyarrow.lib.asarray
File "pyarrow/table.pxi", line 277, in pyarrow.lib.ChunkedArray.cast
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/pyarrow/compute.py", line 297, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 527, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 337, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct
```
## Environment info
- `datasets` version: 1.13.2
- Platform: Linux-4.15.0-147-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3093/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3092/comments | https://api.github.com/repos/huggingface/datasets/issues/3092/events | https://github.com/huggingface/datasets/pull/3092 | 1,027,260,383 | PR_kwDODunzps4tPj6e | 3,092 | Fix JNLBA dataset | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fix #3089.",
"@albertvillanova all tests are passing now. Either you or @lhoestq can review it!"
] | 1,634,290,274,000 | 1,634,891,037,000 | 1,634,891,037,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3092",
"html_url": "https://github.com/huggingface/datasets/pull/3092",
"diff_url": "https://github.com/huggingface/datasets/pull/3092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3092.patch",
"merged_at": 1634891037000
} | As mentioned in #3089, I've added more tags and also updated the link for the dataset, which was earlier using a Google Drive link.
I'm having a problem generating dummy data, as `datasets-cli dummy_data ./datasets/jnlpba --auto_generate --match_text_files "*.iob2"` gives a `datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !` error. I'll try to add dummy data manually. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3092/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3092/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3091/comments | https://api.github.com/repos/huggingface/datasets/issues/3091/events | https://github.com/huggingface/datasets/issues/3091 | 1,027,251,530 | I_kwDODunzps49Op1K | 3,091 | `blog_authorship_corpus` is broken | {
"login": "fdtomasi",
"id": 12514317,
"node_id": "MDQ6VXNlcjEyNTE0MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/12514317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fdtomasi",
"html_url": "https://github.com/fdtomasi",
"followers_url": "https://api.github.com/users/fdtomasi/followers",
"following_url": "https://api.github.com/users/fdtomasi/following{/other_user}",
"gists_url": "https://api.github.com/users/fdtomasi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fdtomasi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fdtomasi/subscriptions",
"organizations_url": "https://api.github.com/users/fdtomasi/orgs",
"repos_url": "https://api.github.com/users/fdtomasi/repos",
"events_url": "https://api.github.com/users/fdtomasi/events{/privacy}",
"received_events_url": "https://api.github.com/users/fdtomasi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @fdtomasi, thanks for reporting.\r\n\r\nYou are right: the original host data URL does no longer exist.\r\n\r\nI've contacted the authors of the dataset to ask them if they host this dataset in another URL.",
"Hi, @fdtomasi, the URL is fixed.\r\n\r\nThe fix is already in our master branch and it will be accessible in our next release.\r\n\r\nIn the meantime, you can include the fix if you install the `datasets` library from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```",
"Awesome thank you so much for the quick fix!"
] | 1,634,289,640,000 | 1,634,648,770,000 | 1,634,647,839,000 | NONE | null | null | null | ## Describe the bug
The dataset `blog_authorship_corpus` is broken.
If the checksum checks are bypassed, loading does not raise any error, but the resulting dataset is empty.
I suspect this is because the data download URL is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip).
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("blog_authorship_corpus", split="train", download_mode='force_redownload')
```
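Until the next release, the fix can be picked up from `master` (command taken from the maintainers' comments in this thread):
```
pip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets
```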
## Expected results
No error.
## Actual results
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
/tmp/ipykernel_5237/1729238701.py in <module>
2 ds = load_dataset(
3 "blog_authorship_corpus", split="train",
----> 4 download_mode='force_redownload'
5 )
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
1115 ignore_verifications=ignore_verifications,
1116 try_from_hf_gcs=try_from_hf_gcs,
-> 1117 use_auth_token=use_auth_token,
1118 )
1119
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
635 if not downloaded_from_gcs:
636 self._download_and_prepare(
--> 637 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
638 )
639 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
707 if verify_infos:
708 verify_checksums(
--> 709 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
710 )
711
/opt/conda/lib/python3.7/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip']
```
## Environment info
- `datasets` version: 1.13.2
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.10
- PyArrow version: 5.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3091/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3090/comments | https://api.github.com/repos/huggingface/datasets/issues/3090/events | https://github.com/huggingface/datasets/pull/3090 | 1,027,100,371 | PR_kwDODunzps4tPEtH | 3,090 | Update BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,276,367,000 | 1,634,283,357,000 | 1,634,283,357,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3090",
"html_url": "https://github.com/huggingface/datasets/pull/3090",
"diff_url": "https://github.com/huggingface/datasets/pull/3090.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3090.patch",
"merged_at": 1634283357000
} | Update BibTeX entry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3090/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3089/comments | https://api.github.com/repos/huggingface/datasets/issues/3089/events | https://github.com/huggingface/datasets/issues/3089 | 1,026,973,360 | I_kwDODunzps49Nl6w | 3,089 | JNLPBA Dataset | {
"login": "sciarrilli",
"id": 10460111,
"node_id": "MDQ6VXNlcjEwNDYwMTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10460111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sciarrilli",
"html_url": "https://github.com/sciarrilli",
"followers_url": "https://api.github.com/users/sciarrilli/followers",
"following_url": "https://api.github.com/users/sciarrilli/following{/other_user}",
"gists_url": "https://api.github.com/users/sciarrilli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sciarrilli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sciarrilli/subscriptions",
"organizations_url": "https://api.github.com/users/sciarrilli/orgs",
"repos_url": "https://api.github.com/users/sciarrilli/repos",
"events_url": "https://api.github.com/users/sciarrilli/events{/privacy}",
"received_events_url": "https://api.github.com/users/sciarrilli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"# Steps to reproduce\r\n\r\nTo reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('jnlpba')\r\n\r\ndataset['train'].features['ner_tags']\r\n```\r\nOutput:\r\n```python\r\nSequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, id=None)\r\n```\r\n\r\n",
"Since I cannot create a branch here is the updated code:\r\n\r\n```python\r\n\r\n# coding=utf-8\r\n# Copyright 2020 HuggingFace Datasets Authors.\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\r\n# Lint as: python3\r\n\"\"\"Introduction to the Bio-Entity Recognition Task at JNLPBA\"\"\"\r\n\r\nimport os\r\n\r\nimport datasets\r\n\r\n\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n_CITATION = \"\"\"\\\r\n@inproceedings{kim2004introduction,\r\n title={Introduction to the bio-entity recognition task at JNLPBA},\r\n author={Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel},\r\n booktitle={Proceedings of the international joint workshop on natural language processing in biomedicine and its applications},\r\n pages={70--75},\r\n year={2004},\r\n organization={Citeseer}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThe data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search\r\non MEDLINE using the MeSH terms \u0018human\u0019, \u0018blood cells\u0019 and \u0018transcription factors\u0019. From this search 2,000 abstracts\r\nwere selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification.\r\nAmong the classes, 36 terminal classes were used to annotate the GENIA corpus.\r\n\"\"\"\r\n\r\n_HOMEPAGE = \"http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004\"\r\n_TRAIN_URL = \"http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Train/Genia4ERtraining.tar.gz\"\r\n_VAL_URL = 'http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Evaluation/Genia4ERtest.tar.gz'\r\n\r\n\r\n_URLS = {\r\n \"train\": _TRAIN_URL,\r\n \"val\": _VAL_URL,\r\n}\r\n\r\n_TRAIN_DIRECTORY = \"Genia4ERtraining\"\r\n_VAL_DIRECTORY = \"Genia4ERtest\"\r\n\r\n_TRAIN_FILE = \"Genia4ERtask1.iob2\"\r\n_VAL_FILE = \"Genia4EReval1.iob2\"\r\n\r\n\r\nclass JNLPBAConfig(datasets.BuilderConfig):\r\n \"\"\"BuilderConfig for JNLPBA\"\"\"\r\n\r\n def __init__(self, **kwargs):\r\n \"\"\"BuilderConfig for JNLPBA.\r\n Args:\r\n **kwargs: keyword arguments forwarded to super.\r\n \"\"\"\r\n super(JNLPBAConfig, self).__init__(**kwargs)\r\n\r\n\r\nclass JNLPBA(datasets.GeneratorBasedBuilder):\r\n \"\"\"JNLPBA dataset.\"\"\"\r\n\r\n BUILDER_CONFIGS = [\r\n JNLPBAConfig(name=\"jnlpba\", version=datasets.Version(\"1.0.0\"), description=\"JNLPBA dataset\"),\r\n ]\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"ner_tags\": datasets.Sequence(\r\n datasets.features.ClassLabel(\r\n names=[\r\n 'O',\r\n 'B-DNA',\r\n 'I-DNA', \r\n 'B-RNA',\r\n 'I-RNA',\r\n 'B-cell_line',\r\n 'I-cell_line',\r\n 'B-cell_type',\r\n 'I-cell_type',\r\n 'B-protein',\r\n 'I-protein',\r\n ]\r\n )\r\n ),\r\n }\r\n ),\r\n supervised_keys=None,\r\n homepage=_HOMEPAGE,\r\n citation=_CITATION,\r\n 
)\r\n\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n \r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['train'], _TRAIN_FILE)}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['val'], _VAL_FILE)})\r\n ]\r\n \r\n\r\n def _generate_examples(self, filepath):\r\n logger.info(\"β³ Generating examples from = %s\", filepath)\r\n with open(filepath, encoding=\"utf-8\") as f:\r\n guid = 0\r\n tokens = []\r\n ner_tags = []\r\n for line in f:\r\n if line.startswith('###'):\r\n continue\r\n if line == '' or line == '\\n':\r\n if tokens:\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n guid += 1\r\n tokens = []\r\n ner_tags = []\r\n else:\r\n # tokens are tab separated\r\n splits = line.split(\"\\t\")\r\n #print(splits)\r\n #print(len(splits))\r\n if len(splits) < 2:\r\n splits = splits[0].split()\r\n tokens.append(splits[0])\r\n ner_tags.append(splits[1].rstrip())\r\n # last example\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n```"
] | 1,634,260,562,000 | 1,634,891,037,000 | 1,634,891,037,000 | NONE | null | null | null | ## Describe the bug
The `jnlpba` loading script defines an incorrect NER tag set (only `O`, `B`, `I` instead of the full IOB2 tags) and points to data that has been modified from the original release.
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset('jnlpba')
dataset['train'].features['ner_tags']
# Sequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, id=None)
```
## Expected results
The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in the [script](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L81-L83) are: O, B, and I. The correct entities from the original data file are:
['O',
'B-DNA',
'I-DNA',
'B-RNA',
'I-RNA',
'B-cell_line',
'I-cell_line',
'B-cell_type',
'I-cell_type',
'B-protein',
'I-protein']
## Actual results
The dataset loader script needs to include the following NER names:
['O',
'B-DNA',
'I-DNA',
'B-RNA',
'I-RNA',
'B-cell_line',
'I-cell_line',
'B-cell_type',
'I-cell_type',
'B-protein',
'I-protein']
And the [data](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L46) that is being pulled has been modified from the original dataset and does not include the original NER tags.
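A sketch of the corrected feature definition, adapted from the fixed loading script posted in the comments above:
```python
import datasets

# The full IOB2 tag set replaces the incorrect ['O', 'B', 'I'] ClassLabel.
ner_tags_feature = datasets.Sequence(
    datasets.features.ClassLabel(
        names=[
            "O",
            "B-DNA", "I-DNA",
            "B-RNA", "I-RNA",
            "B-cell_line", "I-cell_line",
            "B-cell_type", "I-cell_type",
            "B-protein", "I-protein",
        ]
    )
)
```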
## Environment info
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3089/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3088/comments | https://api.github.com/repos/huggingface/datasets/issues/3088/events | https://github.com/huggingface/datasets/pull/3088 | 1,026,920,369 | PR_kwDODunzps4tOhRx | 3,088 | Use template column_mapping to transmit_format instead of template features | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for fixing!"
] | 1,634,255,380,000 | 1,634,308,805,000 | 1,634,292,664,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3088",
"html_url": "https://github.com/huggingface/datasets/pull/3088",
"diff_url": "https://github.com/huggingface/datasets/pull/3088.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3088.patch",
"merged_at": 1634292664000
} | Use `template.column_mapping` to check for modified columns, since `template.features` represents a generic template/column mapping.
Fix #3087
TODO:
- [x] Add a test | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3088/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3088/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3087/comments | https://api.github.com/repos/huggingface/datasets/issues/3087/events | https://github.com/huggingface/datasets/issues/3087 | 1,026,780,469 | I_kwDODunzps49M201 | 3,087 | Removing label column in a text classification dataset yields to errors | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,634,242,370,000 | 1,634,292,664,000 | 1,634,292,664,000 | MEMBER | null | null | null | ## Describe the bug
This looks like #3059, but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing results in an error.
To reproduce:
```py
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("imdb")
raw_datasets = raw_datasets.remove_columns("label")
model_checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
context_length = 128
def tokenize_pad_and_truncate(texts):
return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
```
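A possible workaround (an untested sketch; `task_templates` is the attribute involved in the traceback below): drop the stale text-classification template before mapping.
```py
# Untested sketch: clearing the task templates means DatasetInfo no longer
# looks up the removed "label" column when the info is copied.
for split in raw_datasets:
    raw_datasets[split].info.task_templates = None
```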
Traceback:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-1-ba61bb32f786> in <module>
12 return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
13
---> 14 tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
~/git/datasets/src/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2051 new_fingerprint=new_fingerprint,
2052 disable_tqdm=disable_tqdm,
-> 2053 desc=desc,
2054 )
2055 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
501 self: "Dataset" = kwargs.pop("self")
502 # apply actual function
--> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
505 for dataset in datasets:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2243 if os.path.exists(cache_file_name) and load_from_cache_file:
2244 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2245 info = self.info.copy()
2246 info.features = features
2247 info.task_templates = None
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3087/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3086/comments | https://api.github.com/repos/huggingface/datasets/issues/3086/events | https://github.com/huggingface/datasets/pull/3086 | 1,026,481,905 | PR_kwDODunzps4tNIvp | 3,086 | Remove _resampler from Audio fields | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,222,330,000 | 1,634,224,421,000 | 1,634,224,420,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3086",
"html_url": "https://github.com/huggingface/datasets/pull/3086",
"diff_url": "https://github.com/huggingface/datasets/pull/3086.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3086.patch",
"merged_at": 1634224420000
} | The `_resampler` Audio attribute was implemented to optimize audio resampling, but it should not be cached.
This PR removes `_resampler` from Audio fields, so that it is not returned by `fields()` or `asdict()`.
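A quick way to check the intended behavior (a sketch; the exact remaining field names depend on the library version):
```python
import dataclasses
from datasets.features import Audio

# After this change, the private resampler is no longer a dataclass field,
# so it should not appear in this list:
print([f.name for f in dataclasses.fields(Audio())])
```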
Fix #3083. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3086/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3085/comments | https://api.github.com/repos/huggingface/datasets/issues/3085/events | https://github.com/huggingface/datasets/pull/3085 | 1,026,467,384 | PR_kwDODunzps4tNFza | 3,085 | Fixes to `to_tf_dataset` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you give some details about why you need these changes ?",
"Hey, sorry, I should have explained! I've been getting a lot of `VisibleDeprecationWarning` from Numpy, due to an issue in the formatter, see #3084 . This is a temporary workaround (since I'm using these methods in the upcoming course) until I can fix that issue, because I couldn't see an obvious fix for the Numpy formatter. If you can see a quick way to fix that, though, that might be even better!"
] | 1,634,221,556,000 | 1,634,828,729,000 | 1,634,828,728,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3085",
"html_url": "https://github.com/huggingface/datasets/pull/3085",
"diff_url": "https://github.com/huggingface/datasets/pull/3085.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3085.patch",
"merged_at": 1634828728000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3085/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3084/comments | https://api.github.com/repos/huggingface/datasets/issues/3084/events | https://github.com/huggingface/datasets/issues/3084 | 1,026,428,992 | I_kwDODunzps49LhBA | 3,084 | VisibleDeprecationWarning when using `set_format("numpy")` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)"
] | 1,634,219,581,000 | 1,634,918,654,000 | 1,634,918,654,000 | CONTRIBUTOR | null | null | null | Code to reproduce:
```python
from datasets import load_dataset
dataset = load_dataset("glue", "mnli")
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased')
def tokenize_function(dataset):
return tokenizer(dataset['premise'])
tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=dataset['train'].features)
tokenized_datasets.set_format("numpy")
tokenized_datasets['train'][5:8]
```
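The warning comes from NumPy being asked to build an array out of ragged (variable-length) token lists. One way to avoid it on the user side (a sketch, not the library-side fix) is to pad to a fixed length so every row has the same shape:
```python
# Sketch (reuses `tokenizer` from the snippet above): fixed-length padding
# makes the columns rectangular, so the NumPy formatter no longer creates
# a ragged object array.
def tokenize_function(dataset):
    return tokenizer(dataset["premise"], padding="max_length", max_length=128, truncation=True)
```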
Running the original snippet (without the padding sketched above) outputs:
```
python3.9/site-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array(array, copy=False, **self.np_array_kwargs)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3084/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3083/comments | https://api.github.com/repos/huggingface/datasets/issues/3083/events | https://github.com/huggingface/datasets/issues/3083 | 1,026,397,062 | I_kwDODunzps49LZOG | 3,083 | Datasets with Audio feature raise error when loaded from cache due to _resampler parameter | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,217,833,000 | 1,634,224,420,000 | 1,634,224,420,000 | MEMBER | null | null | null | ## Describe the bug
As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise a TypeError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# load first time works
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache breaks
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: __init__() got an unexpected keyword argument '_resampler'
```
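Until a fix lands, one workaround sketch (my assumption, not verified): bypass the cache that carries the stale `_resampler` entry.
```python
from datasets import load_dataset

# "force_redownload" is an existing download_mode value; that it sidesteps the
# cached `_resampler` error is an assumption
ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy", "clean", download_mode="force_redownload"
)
```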
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3083/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3082/comments | https://api.github.com/repos/huggingface/datasets/issues/3082/events | https://github.com/huggingface/datasets/pull/3082 | 1,026,388,994 | PR_kwDODunzps4tM2BV | 3,082 | Fix error related to huggingface_hub timeout parameter | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,217,467,000 | 1,634,222,392,000 | 1,634,222,391,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3082",
"html_url": "https://github.com/huggingface/datasets/pull/3082",
"diff_url": "https://github.com/huggingface/datasets/pull/3082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3082.patch",
"merged_at": 1634222391000
} | The `huggingface_hub` package added the `timeout` parameter in version 0.0.19.
This PR bumps the minimum required version accordingly.
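For reference, the bump might look roughly like this in `setup.py` (a hypothetical excerpt, not copied from the diff):
```python
# setup.py (excerpt, hypothetical): pin the minimum huggingface_hub version
REQUIRED_PKGS = [
    "huggingface_hub>=0.0.19",  # dataset_info() gained the `timeout` kwarg in 0.0.19
]
```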
Fix #3080. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3082/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3081/comments | https://api.github.com/repos/huggingface/datasets/issues/3081/events | https://github.com/huggingface/datasets/pull/3081 | 1,026,383,749 | PR_kwDODunzps4tM1Gy | 3,081 | [Audio datasets] Adapting all audio datasets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq - are there other important speech datasets that I'm forgetting here? \r\n\r\nThink PR is good to go otherwise",
"@lhoestq @albertvillanova - how can we make an exception for the AMI README so that the test doesn't fail? The dataset card definitely should have a data preprocessing section",
"Hi @patrickvonplaten ,\r\n\r\nthe data preprocessing section is not defined as a valid section in the readme validation file. After this line:\r\nhttps://github.com/huggingface/datasets/blob/568db594d51110da9e23d224abded2a976b3c8c7/src/datasets/utils/resources/readme_structure.yaml#L20\r\nfeel free to insert (correctly indented of course):\r\n```python\r\n- name: \"Dataset Preprocessing\"\r\n allow_empty: true\r\n allow_empty_text: true\r\n subsections: null\r\n```\r\nand then the tests should pass.",
"Thanks a lot @albertvillanova - I've added the feature to all audio datasets and corrected the task of `covost2`"
] | 1,634,217,225,000 | 1,634,302,323,000 | 1,634,300,553,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3081",
"html_url": "https://github.com/huggingface/datasets/pull/3081",
"diff_url": "https://github.com/huggingface/datasets/pull/3081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3081.patch",
"merged_at": 1634300553000
} | This PR adds the new `Audio(...)` feature (see: https://github.com/huggingface/datasets/pull/2324) to the most important audio datasets; a sketch of the per-script change follows the list:
- Librispeech
- Timit
- Common Voice
- AMI
- ... (others I'm forgetting now)
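As mentioned above, the per-script change looks roughly like this (a sketch: the 16 kHz rate and column names are illustrative and vary per dataset):
```python
import datasets

features = datasets.Features(
    {
        "file": datasets.Value("string"),
        "audio": datasets.Audio(sampling_rate=16_000),  # the new feature from #2324
        "text": datasets.Value("string"),
    }
)
```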
The PR is currently blocked because the following leads to a problem:
```python
from datasets import load_dataset
# load first time works
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache breaks
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
As soon as it's unblocked, I'll adapt the other audio datasets as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3081/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3080/comments | https://api.github.com/repos/huggingface/datasets/issues/3080/events | https://github.com/huggingface/datasets/issues/3080 | 1,026,380,626 | I_kwDODunzps49LVNS | 3,080 | Error related to timeout keyword argument | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,217,058,000 | 1,634,222,391,000 | 1,634,222,391,000 | MEMBER | null | null | null | ## Describe the bug
As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: dataset_info() got an unexpected keyword argument 'timeout'
```
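A quick sanity check for affected environments (a sketch; that 0.0.19 is the first version with the `timeout` kwarg is taken from the linked fix, #3082):
```python
from packaging import version

import huggingface_hub

# older huggingface_hub versions lack the `timeout` kwarg and trigger the error above
assert version.parse(huggingface_hub.__version__) >= version.parse("0.0.19")
```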
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3080/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3077/comments | https://api.github.com/repos/huggingface/datasets/issues/3077/events | https://github.com/huggingface/datasets/pull/3077 | 1,026,150,362 | PR_kwDODunzps4tMFPG | 3,077 | Fix loading a metric with internal import | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,202,418,000 | 1,634,202,896,000 | 1,634,202,895,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3077",
"html_url": "https://github.com/huggingface/datasets/pull/3077",
"diff_url": "https://github.com/huggingface/datasets/pull/3077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3077.patch",
"merged_at": 1634202895000
} | After refactoring the module factory (#2986), a bug was introduced when loading metrics with internal imports.
This PR adds a new test case and fixes this bug.
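"Internal import" here means a metric script that imports a companion module shipped next to it, schematically (file and function names are illustrative, not the actual script):
```python
# squad_v2/squad_v2.py (schematic)
import datasets

from .evaluate import compute_score  # companion file downloaded alongside the script
```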
Fix #3076.
CC: @sgugger @merveenoyan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3077/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3077/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3076/comments | https://api.github.com/repos/huggingface/datasets/issues/3076/events | https://github.com/huggingface/datasets/issues/3076 | 1,026,113,484 | I_kwDODunzps49KT_M | 3,076 | Error when loading a metric | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,200,167,000 | 1,634,202,895,000 | 1,634,202,895,000 | MEMBER | null | null | null | ## Describe the bug
As reported by @sgugger, after the last release, an exception is thrown when loading a metric.
## Steps to reproduce the bug
```python
from datasets import load_metric
metric = load_metric("squad_v2")
```
## Actual results
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-1-e612a8cab787> in <module>
1 from datasets import load_metric
----> 2 metric = load_metric("squad_v2")
d:\projects\huggingface\datasets\src\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, script_version, **metric_init_kwargs)
1336 )
1337 revision = script_version
-> 1338 metric_module = metric_module_factory(
1339 path, revision=revision, download_config=download_config, download_mode=download_mode
1340 ).module_path
d:\projects\huggingface\datasets\src\datasets\load.py in metric_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, **download_kwargs)
1237 if not isinstance(e1, FileNotFoundError):
1238 raise e1 from None
-> 1239 raise FileNotFoundError(
1240 f"Couldn't find a metric script at {relative_to_absolute_path(combined_path)}. "
1241 f"Metric '{path}' doesn't exist on the Hugging Face Hub either."
FileNotFoundError: Couldn't find a metric script at D:\projects\huggingface\datasets\squad_v2\squad_v2.py. Metric 'squad_v2' doesn't exist on the Hugging Face Hub either.
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3076/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3075/comments | https://api.github.com/repos/huggingface/datasets/issues/3075/events | https://github.com/huggingface/datasets/pull/3075 | 1,026,103,388 | PR_kwDODunzps4tL75E | 3,075 | Updates LexGLUE and MultiEURLEX README.md files | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,199,556,000 | 1,634,552,020,000 | 1,634,552,020,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3075",
"html_url": "https://github.com/huggingface/datasets/pull/3075",
"diff_url": "https://github.com/huggingface/datasets/pull/3075.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3075.patch",
"merged_at": 1634552020000
} | Updates LexGLUE and MultiEURLEX README.md files
- Fix leaderboard in LexGLUE.
- Fix an error in the CaseHOLD data example.
- Turn the MultiEURLEX dataset statistics table into HTML so it renders nicely on the HF website. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3075/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3074/comments | https://api.github.com/repos/huggingface/datasets/issues/3074/events | https://github.com/huggingface/datasets/pull/3074 | 1,025,940,085 | PR_kwDODunzps4tLbe- | 3,074 | add XCSR dataset | {
"login": "yangxqiao",
"id": 42788901,
"node_id": "MDQ6VXNlcjQyNzg4OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/42788901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangxqiao",
"html_url": "https://github.com/yangxqiao",
"followers_url": "https://api.github.com/users/yangxqiao/followers",
"following_url": "https://api.github.com/users/yangxqiao/following{/other_user}",
"gists_url": "https://api.github.com/users/yangxqiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangxqiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangxqiao/subscriptions",
"organizations_url": "https://api.github.com/users/yangxqiao/orgs",
"repos_url": "https://api.github.com/users/yangxqiao/repos",
"events_url": "https://api.github.com/users/yangxqiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangxqiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are as small as possible, however here each zip file is 70KB+. It think we can make them even smaller if we remove unnecessary files in them. In particular in the `ar` dummy data zip file, we don't need the data for all languages, but rather only the `ar` files. Could you try to remove the unnecessary files in the dummy data zip files ?\r\n\r\nHi! \r\n\r\nThank you so much for reviewing this PR. I've updated the README to briefly mention the translations and added a link to the paper, where a detailed description of the translation procedure can be found in the appendix.\r\n\r\nFor the dummy_data.zip files, is it possible to keep all the current files? I tried to remove some of the files, but the removal led to a failure in the local testing. We also think it may be better to keep the current dummy_data.zip files because all the data are useful actually. Thanks a lot!!",
"Hi @lhoestq, just a gentle ping on this PR. :D "
] | 1,634,186,399,000 | 1,636,379,556,000 | 1,636,379,556,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3074",
"html_url": "https://github.com/huggingface/datasets/pull/3074",
"diff_url": "https://github.com/huggingface/datasets/pull/3074.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3074.patch",
"merged_at": 1636379556000
} | Hi,
I wanted to add the [XCSR](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :)
I followed the instructions for adding a new dataset to huggingface and have all the required files ready now! It would be super helpful if you could take a look and review them. Thanks in advance for your time and help. Looking forward to hearing from you, and I can't wait to add XCSR to huggingface :D | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3074/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3073/comments | https://api.github.com/repos/huggingface/datasets/issues/3073/events | https://github.com/huggingface/datasets/issues/3073 | 1,025,718,469 | I_kwDODunzps49IzjF | 3,073 | Import error installing with ppc64le | {
"login": "gcervantes8",
"id": 21228908,
"node_id": "MDQ6VXNlcjIxMjI4OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gcervantes8",
"html_url": "https://github.com/gcervantes8",
"followers_url": "https://api.github.com/users/gcervantes8/followers",
"following_url": "https://api.github.com/users/gcervantes8/following{/other_user}",
"gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions",
"organizations_url": "https://api.github.com/users/gcervantes8/orgs",
"repos_url": "https://api.github.com/users/gcervantes8/repos",
"events_url": "https://api.github.com/users/gcervantes8/events{/privacy}",
"received_events_url": "https://api.github.com/users/gcervantes8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.\r\n"
] | 1,634,161,043,000 | 1,634,229,346,000 | 1,634,229,208,000 | NONE | null | null | null | ## Describe the bug
Installing the datasets library on a machine running ppc64le seems to cause an issue when importing the library.
```
python
Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Illegal instruction (core dumped)
```
Error when importing
`Illegal instruction (core dumped)`
## Steps to reproduce the bug
I get this error when installing the library with conda. I believe I can't install with pip because pyarrow only provides ppc64le builds on conda-forge.
```
conda create --name transformers_py36_v2 python=3.6
conda activate transformers_py36_v2
conda install datasets
```
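To narrow the crash down, a quick check (my suggestion) is whether importing `pyarrow` alone reproduces it:
```python
# if this alone dies with "Illegal instruction", the problem is in the pyarrow
# build rather than in `datasets` itself
import pyarrow

print(pyarrow.__version__)
```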
## Tracebacks
`conda create --name transformers_py36_v2 python=3.6`
```
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.10.3
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2
added / updated specs:
- python=3.6
The following NEW packages will be INSTALLED:
_libgcc_mutex conda-forge/linux-ppc64le::_libgcc_mutex-0.1-conda_forge
_openmp_mutex conda-forge/linux-ppc64le::_openmp_mutex-4.5-1_gnu
ca-certificates conda-forge/linux-ppc64le::ca-certificates-2021.10.8-h1084571_0
certifi pkgs/main/linux-ppc64le::certifi-2020.12.5-py36h6ffa863_0
ld_impl_linux-ppc~ conda-forge/linux-ppc64le::ld_impl_linux-ppc64le-2.36.1-ha35d02b_2
libffi conda-forge/linux-ppc64le::libffi-3.4.2-h3b9df90_4
libgcc-ng conda-forge/linux-ppc64le::libgcc-ng-11.2.0-h7698a5e_11
libgomp conda-forge/linux-ppc64le::libgomp-11.2.0-h7698a5e_11
libstdcxx-ng conda-forge/linux-ppc64le::libstdcxx-ng-11.2.0-habdf983_11
libzlib conda-forge/linux-ppc64le::libzlib-1.2.11-h339bb43_1013
ncurses conda-forge/linux-ppc64le::ncurses-6.2-hea85c5d_4
openssl conda-forge/linux-ppc64le::openssl-1.1.1l-h4e0d66e_0
pip conda-forge/noarch::pip-21.3-pyhd8ed1ab_0
python conda-forge/linux-ppc64le::python-3.6.13-h57873ef_2_cpython
readline conda-forge/linux-ppc64le::readline-8.1-h5c45dff_0
setuptools pkgs/main/linux-ppc64le::setuptools-58.0.4-py36h6ffa863_0
sqlite conda-forge/linux-ppc64le::sqlite-3.36.0-h4e2196e_2
tk conda-forge/linux-ppc64le::tk-8.6.11-h41c6715_1
wheel conda-forge/noarch::wheel-0.37.0-pyhd8ed1ab_1
xz conda-forge/linux-ppc64le::xz-5.2.5-h6eb9509_1
zlib conda-forge/linux-ppc64le::zlib-1.2.11-h339bb43_1013
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate transformers_py36_v2
#
# To deactivate an active environment, use
#
# $ conda deactivate
```
`conda activate transformers_py36_v2`
`conda install datasets`
```
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.10.3
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2
added / updated specs:
- datasets
The following NEW packages will be INSTALLED:
abseil-cpp conda-forge/linux-ppc64le::abseil-cpp-20210324.2-h3b9df90_0
aiohttp conda-forge/linux-ppc64le::aiohttp-3.7.4.post0-py36hc33305d_0
arrow-cpp conda-forge/linux-ppc64le::arrow-cpp-5.0.0-py36hf9cf308_8_cpu
async-timeout conda-forge/noarch::async-timeout-3.0.1-py_1000
attrs conda-forge/noarch::attrs-21.2.0-pyhd8ed1ab_0
aws-c-cal conda-forge/linux-ppc64le::aws-c-cal-0.5.11-hb3fac3d_0
aws-c-common conda-forge/linux-ppc64le::aws-c-common-0.6.2-h4e0d66e_0
aws-c-event-stream conda-forge/linux-ppc64le::aws-c-event-stream-0.2.7-h76da5f2_13
aws-c-io conda-forge/linux-ppc64le::aws-c-io-0.10.5-hf6a6c7c_0
aws-checksums conda-forge/linux-ppc64le::aws-checksums-0.1.11-hfe76d68_7
aws-sdk-cpp conda-forge/linux-ppc64le::aws-sdk-cpp-1.8.186-h90855e8_3
brotlipy conda-forge/linux-ppc64le::brotlipy-0.7.0-py36hc33305d_1001
bzip2 conda-forge/linux-ppc64le::bzip2-1.0.8-h4e0d66e_4
c-ares conda-forge/linux-ppc64le::c-ares-1.17.2-h4e0d66e_0
cffi conda-forge/linux-ppc64le::cffi-1.14.6-py36h021ab3c_1
chardet conda-forge/linux-ppc64le::chardet-4.0.0-py36h270354c_1
colorama conda-forge/noarch::colorama-0.4.4-pyh9f0ad1d_0
cryptography conda-forge/linux-ppc64le::cryptography-3.4.7-py36hc71b123_0
dataclasses conda-forge/noarch::dataclasses-0.8-pyh787bdff_2
datasets conda-forge/noarch::datasets-1.12.1-pyhd8ed1ab_1
dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0
filelock conda-forge/noarch::filelock-3.3.0-pyhd8ed1ab_0
fsspec conda-forge/noarch::fsspec-2021.10.0-pyhd8ed1ab_0
gflags conda-forge/linux-ppc64le::gflags-2.2.2-hb209c28_1004
glog conda-forge/linux-ppc64le::glog-0.5.0-h4040248_0
grpc-cpp conda-forge/linux-ppc64le::grpc-cpp-1.40.0-h2bf711c_2
huggingface_hub conda-forge/noarch::huggingface_hub-0.0.19-pyhd8ed1ab_0
idna conda-forge/noarch::idna-2.10-pyh9f0ad1d_0
idna_ssl conda-forge/noarch::idna_ssl-1.0.0-0
importlib-metadata conda-forge/linux-ppc64le::importlib-metadata-4.8.1-py36h270354c_0
importlib_metadata conda-forge/noarch::importlib_metadata-4.8.1-hd8ed1ab_0
krb5 conda-forge/linux-ppc64le::krb5-1.19.2-haf43566_2
libblas conda-forge/linux-ppc64le::libblas-3.9.0-11_linuxppc64le_openblas
libbrotlicommon conda-forge/linux-ppc64le::libbrotlicommon-1.0.9-h4e0d66e_5
libbrotlidec conda-forge/linux-ppc64le::libbrotlidec-1.0.9-h4e0d66e_5
libbrotlienc conda-forge/linux-ppc64le::libbrotlienc-1.0.9-h4e0d66e_5
libcblas conda-forge/linux-ppc64le::libcblas-3.9.0-11_linuxppc64le_openblas
libcurl conda-forge/linux-ppc64le::libcurl-7.79.1-he415e40_1
libedit conda-forge/linux-ppc64le::libedit-3.1.20191231-h41a240f_2
libev conda-forge/linux-ppc64le::libev-4.33-h6eb9509_1
libevent conda-forge/linux-ppc64le::libevent-2.1.10-h97db324_4
libgfortran-ng conda-forge/linux-ppc64le::libgfortran-ng-11.2.0-hfdc3801_11
libgfortran5 conda-forge/linux-ppc64le::libgfortran5-11.2.0-he58fbb4_11
liblapack conda-forge/linux-ppc64le::liblapack-3.9.0-11_linuxppc64le_openblas
libnghttp2 conda-forge/linux-ppc64le::libnghttp2-1.43.0-h42039ad_1
libopenblas conda-forge/linux-ppc64le::libopenblas-0.3.17-pthreads_h486567c_1
libprotobuf conda-forge/linux-ppc64le::libprotobuf-3.18.1-h690f14c_0
libssh2 conda-forge/linux-ppc64le::libssh2-1.10.0-ha5a9321_2
libthrift conda-forge/linux-ppc64le::libthrift-0.15.0-h54f692e_1
libutf8proc conda-forge/linux-ppc64le::libutf8proc-2.6.1-h4e0d66e_0
lz4-c conda-forge/linux-ppc64le::lz4-c-1.9.3-h3b9df90_1
multidict conda-forge/linux-ppc64le::multidict-5.2.0-py36hc33305d_0
multiprocess conda-forge/linux-ppc64le::multiprocess-0.70.12.2-py36hc33305d_0
numpy conda-forge/linux-ppc64le::numpy-1.19.5-py36h86665d4_1
orc conda-forge/linux-ppc64le::orc-1.7.0-hae6b4bd_0
packaging conda-forge/noarch::packaging-21.0-pyhd8ed1ab_0
pandas conda-forge/linux-ppc64le::pandas-1.1.5-py36hab1a6e6_0
parquet-cpp conda-forge/noarch::parquet-cpp-1.5.1-2
pyarrow conda-forge/linux-ppc64le::pyarrow-5.0.0-py36h7a46c7e_8_cpu
pycparser conda-forge/noarch::pycparser-2.20-pyh9f0ad1d_2
pyopenssl conda-forge/noarch::pyopenssl-21.0.0-pyhd8ed1ab_0
pyparsing conda-forge/noarch::pyparsing-2.4.7-pyh9f0ad1d_0
pysocks conda-forge/linux-ppc64le::pysocks-1.7.1-py36h270354c_3
python-dateutil conda-forge/noarch::python-dateutil-2.8.2-pyhd8ed1ab_0
python-xxhash conda-forge/linux-ppc64le::python-xxhash-2.0.2-py36hc33305d_0
python_abi conda-forge/linux-ppc64le::python_abi-3.6-2_cp36m
pytz conda-forge/noarch::pytz-2021.3-pyhd8ed1ab_0
pyyaml conda-forge/linux-ppc64le::pyyaml-5.4.1-py36hc33305d_1
re2 conda-forge/linux-ppc64le::re2-2021.09.01-h3b9df90_0
requests conda-forge/noarch::requests-2.25.1-pyhd3deb0d_0
s2n conda-forge/linux-ppc64le::s2n-1.0.10-h97db324_0
six conda-forge/noarch::six-1.16.0-pyh6c4a22f_0
snappy conda-forge/linux-ppc64le::snappy-1.1.8-hb209c28_3
tqdm conda-forge/noarch::tqdm-4.62.3-pyhd8ed1ab_0
typing-extensions conda-forge/noarch::typing-extensions-3.10.0.2-hd8ed1ab_0
typing_extensions conda-forge/noarch::typing_extensions-3.10.0.2-pyha770c72_0
urllib3 conda-forge/noarch::urllib3-1.26.7-pyhd8ed1ab_0
xxhash conda-forge/linux-ppc64le::xxhash-0.8.0-h4e0d66e_3
yaml conda-forge/linux-ppc64le::yaml-0.2.5-h6eb9509_0
yarl conda-forge/linux-ppc64le::yarl-1.6.3-py36hc33305d_2
zipp conda-forge/noarch::zipp-3.6.0-pyhd8ed1ab_0
zstd conda-forge/linux-ppc64le::zstd-1.5.0-h65c4b1a_0
The following packages will be UPDATED:
certifi pkgs/main::certifi-2020.12.5-py36h6ff~ --> conda-forge::certifi-2021.5.30-py36h270354c_0
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Red Hat Enterprise Linux 8.2 (Ootpa)
- Python version: 3.6
- PyArrow version: pyarrow - 5.0.0 - py36h7a46c7e_8_cpu - conda-forge
Any help would be appreciated! I've been struggling to install datasets on this machine.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3073/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3072/comments | https://api.github.com/repos/huggingface/datasets/issues/3072/events | https://github.com/huggingface/datasets/pull/3072 | 1,025,233,152 | PR_kwDODunzps4tJNnD | 3,072 | Fix pathlib patches for streaming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,130,675,000 | 1,634,131,865,000 | 1,634,131,865,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3072",
"html_url": "https://github.com/huggingface/datasets/pull/3072",
"diff_url": "https://github.com/huggingface/datasets/pull/3072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3072.patch",
"merged_at": 1634131865000
} | Fix issue https://github.com/huggingface/datasets/issues/2866 (for good this time)
`counter` now works in both streaming and non-streaming mode.
And the `AttributeError: 'str' object has no attribute 'as_posix'` related to the patch of `Path.open` is fixed as well.
Note: the patches should only affect the datasets module, not the user's own paths! That's why we should probably use something other than `patch.object` to patch the `Path` class's methods.
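One possible direction (my sketch, not what this PR implements) is a dedicated subclass used only inside `datasets`, so the stdlib class stays untouched for users:
```python
import pathlib

class StreamingPath(type(pathlib.Path())):
    """Hypothetical streaming-aware Path; the name and behavior are illustrative."""

    def open(self, *args, **kwargs):
        # a real implementation would route remote URLs through fsspec here
        return super().open(*args, **kwargs)
```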
cc @severo @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3072/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3071/comments | https://api.github.com/repos/huggingface/datasets/issues/3071/events | https://github.com/huggingface/datasets/issues/3071 | 1,024,893,493 | I_kwDODunzps49FqI1 | 3,071 | Custom plain text dataset, plain json dataset and plain csv dataset are removed from datasets template folder | {
"login": "zixiliuUSC",
"id": 49173327,
"node_id": "MDQ6VXNlcjQ5MTczMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zixiliuUSC",
"html_url": "https://github.com/zixiliuUSC",
"followers_url": "https://api.github.com/users/zixiliuUSC/followers",
"following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}",
"gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions",
"organizations_url": "https://api.github.com/users/zixiliuUSC/orgs",
"repos_url": "https://api.github.com/users/zixiliuUSC/repos",
"events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}",
"received_events_url": "https://api.github.com/users/zixiliuUSC/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```"
] | 1,634,110,330,000 | 1,634,113,624,000 | 1,634,113,623,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** text, json, csv
- **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files, and the only dataset loading template I can find that handles my circumstance is [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py). I'm afraid these templates are too old to use. Could you re-add these three templates to the current master branch?
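For the many-files case, the packaged `json`/`csv`/`text` loaders also accept a list of files per split (a sketch with made-up file names):
```python
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={"train": ["part-000.json", "part-001.json"], "test": "test.json"},
)
```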
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3071/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3070/comments | https://api.github.com/repos/huggingface/datasets/issues/3070/events | https://github.com/huggingface/datasets/pull/3070 | 1,024,856,745 | PR_kwDODunzps4tIBRp | 3,070 | Fix Windows CI with FileNotFoundError when stting up s3_base fixture | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks ! Sorry for the inconvenience ^^' "
] | 1,634,107,741,000 | 1,634,115,313,000 | 1,634,107,788,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3070",
"html_url": "https://github.com/huggingface/datasets/pull/3070",
"diff_url": "https://github.com/huggingface/datasets/pull/3070.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3070.patch",
"merged_at": 1634107788000
} | Fix #3069. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3070/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3069/comments | https://api.github.com/repos/huggingface/datasets/issues/3069/events | https://github.com/huggingface/datasets/issues/3069 | 1,024,818,680 | I_kwDODunzps49FX34 | 3,069 | CI fails on Windows with FileNotFoundError when stting up s3_base fixture | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,104,346,000 | 1,634,112,349,000 | 1,634,107,788,000 | MEMBER | null | null | null | ## Describe the bug
After commit 9353fc863d0c99ab0427f83cc5a4f04fcf52f1df, the CI fails on Windows with FileNotFoundError when setting up the s3_base fixture. See: https://app.circleci.com/pipelines/github/huggingface/datasets/8151/workflows/5db8d154-badd-4d3d-b202-ca7a318997a2/jobs/50321
Error summary:
```
ERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - FileNotF...
ERROR tests/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - FileNotFo...
```
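A plausible reading (my hypothesis, not confirmed against the eventual fix): on Windows, `Popen` may fail to locate the `moto_server` console script. Resolving the entry point to an absolute path first would make the lookup explicit:
```python
import shutil
import subprocess

# shutil.which returns None when moto isn't installed, hence the guard
moto = shutil.which("moto_server")
if moto is not None:
    proc = subprocess.Popen([moto, "s3", "-p", "5555"])
```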
Stack trace:
```
______________ ERROR at setup of test_dummy_dataset_serialize_s3 ______________
[gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe
@pytest.fixture()
def s3_base():
# writable local S3 system
import shlex
import subprocess
# Mocked AWS Credentials for moto.
old_environ = os.environ.copy()
os.environ.update(S3_FAKE_ENV_VARS)
> proc = subprocess.Popen(shlex.split("moto_server s3 -p %s" % s3_port))
tests\s3_fixtures.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\tools\miniconda3\lib\subprocess.py:729: in __init__
restore_signals, start_new_session)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <subprocess.Popen object at 0x0000012BB8A4B908>
args = 'moto_server s3 -p 5555', executable = None, preexec_fn = None
close_fds = True, pass_fds = (), cwd = None, env = None
startupinfo = <subprocess.STARTUPINFO object at 0x0000012BB8177630>
creationflags = 0, shell = False, p2cread = -1, p2cwrite = -1, c2pread = -1
c2pwrite = -1, errread = -1, errwrite = -1, unused_restore_signals = True
unused_start_new_session = False
def _execute_child(self, args, executable, preexec_fn, close_fds,
pass_fds, cwd, env,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite,
unused_restore_signals, unused_start_new_session):
"""Execute program (MS Windows version)"""
assert not pass_fds, "pass_fds not supported on Windows."
if not isinstance(args, str):
args = list2cmdline(args)
# Process startup details
if startupinfo is None:
startupinfo = STARTUPINFO()
if -1 not in (p2cread, c2pwrite, errwrite):
startupinfo.dwFlags |= _winapi.STARTF_USESTDHANDLES
startupinfo.hStdInput = p2cread
startupinfo.hStdOutput = c2pwrite
startupinfo.hStdError = errwrite
if shell:
startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW
startupinfo.wShowWindow = _winapi.SW_HIDE
comspec = os.environ.get("COMSPEC", "cmd.exe")
args = '{} /c "{}"'.format (comspec, args)
# Start the process
try:
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
# no special security
None, None,
int(not close_fds),
creationflags,
env,
os.fspath(cwd) if cwd is not None else None,
> startupinfo)
E FileNotFoundError: [WinError 2] The system cannot find the file specified
C:\tools\miniconda3\lib\subprocess.py:1017: FileNotFoundError
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3069/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3068/comments | https://api.github.com/repos/huggingface/datasets/issues/3068/events | https://github.com/huggingface/datasets/pull/3068 | 1,024,681,264 | PR_kwDODunzps4tHhOC | 3,068 | feat: increase streaming retry config | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I had 2 runs for more than 2 days each, continuously streaming (they were failing before with 3 retries at 1 sec interval).\r\n\r\nThey are running on TPU's (so great internet connection) and only had connection errors a few times each (3 & 4). Each time it worked after only 1 retry.\r\nThe reason for a higher number of retries is for local connections. It would allow for almost 2mn of a wifi/ethernet disconnection. In practice this should not happen very often.\r\n\r\nLet me know if you think it's too much."
] | 1,634,090,450,000 | 1,634,117,156,000 | 1,634,117,154,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3068",
"html_url": "https://github.com/huggingface/datasets/pull/3068",
"diff_url": "https://github.com/huggingface/datasets/pull/3068.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3068.patch",
"merged_at": 1634117154000
} | Increase streaming config parameters:
* retry interval set to 5 seconds
* max retries set to 20 (i.e., up to 1 min 40 s of retrying in total) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3068/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3067/comments | https://api.github.com/repos/huggingface/datasets/issues/3067/events | https://github.com/huggingface/datasets/pull/3067 | 1,024,023,185 | PR_kwDODunzps4tFSCy | 3,067 | add story_cloze | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npretty_name: Story Cloze Test\r\n```\r\nand filling the other tags task_categories and task_ids\r\n\r\nIf the dataset doesn exist on paperswithcode, you can just leave\r\n```yaml\r\npaperswithcode_id: null\r\n```",
"@lhoestq can't fix the last test fails.",
"> Thanks @zaidalyafeai, the failing test is due to an issue in the master branch, that has already been fixed.\r\n> \r\n> You can include the fix:\r\n> \r\n> ```\r\n> git checkout add_story_cloze\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> ```\r\n\r\nThanks @albertvillanova, passed all the tests now. ",
"Thanks Albert, I fixed the suggested comments. This dataset has no train splits, it is only used for evaluation."
] | 1,634,056,613,000 | 1,634,132,893,000 | 1,634,132,893,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3067",
"html_url": "https://github.com/huggingface/datasets/pull/3067",
"diff_url": "https://github.com/huggingface/datasets/pull/3067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3067.patch",
"merged_at": 1634132893000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3067/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3066/comments | https://api.github.com/repos/huggingface/datasets/issues/3066/events | https://github.com/huggingface/datasets/pull/3066 | 1,024,005,311 | PR_kwDODunzps4tFObl | 3,066 | Add iter_archive | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,055,436,000 | 1,634,548,367,000 | 1,634,548,366,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3066",
"html_url": "https://github.com/huggingface/datasets/pull/3066",
"diff_url": "https://github.com/huggingface/datasets/pull/3066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3066.patch",
"merged_at": 1634548366000
} | Added the `iter_archive` method for the StreamingDownloadManager.
It was already implemented in the regular DownloadManager.
Now it can be used to stream from TAR archives as mentioned in https://github.com/huggingface/datasets/issues/2829
I also updated the `food101` dataset as an example.
Any image/audio dataset using TAR archives can be updated to use `iter_archive` in order to be streamable :)
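As a rough sketch of the usage (my own illustration, not code from this PR; `_URL` and the feature layout are placeholders), a loader script passes the archive iterator through `gen_kwargs` and consumes `(path, file object)` pairs:
```python
import datasets

def _split_generators(self, dl_manager):
    archive_path = dl_manager.download(_URL)  # placeholder URL to a .tar archive
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            # iter_archive reads the archive sequentially, which also
            # works in streaming mode (no random access needed)
            gen_kwargs={"files": dl_manager.iter_archive(archive_path)},
        )
    ]

def _generate_examples(self, files):
    # iter_archive yields (path_inside_archive, file_object) pairs in order
    for key, (path, f) in enumerate(files):
        yield key, {"file_name": path, "bytes": f.read()}
```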
cc @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3066/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3065/comments | https://api.github.com/repos/huggingface/datasets/issues/3065/events | https://github.com/huggingface/datasets/pull/3065 | 1,023,951,322 | PR_kwDODunzps4tFDjk | 3,065 | Fix test command after refac | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,052,210,000 | 1,634,052,527,000 | 1,634,052,526,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3065",
"html_url": "https://github.com/huggingface/datasets/pull/3065",
"diff_url": "https://github.com/huggingface/datasets/pull/3065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3065.patch",
"merged_at": 1634052526000
} | Fix the `datasets-cli` test command after the `prepare_module` change in #2986 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3065/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3064/comments | https://api.github.com/repos/huggingface/datasets/issues/3064/events | https://github.com/huggingface/datasets/issues/3064 | 1,023,900,075 | I_kwDODunzps49B3mr | 3,064 | Make `interleave_datasets` more robust | {
"login": "sbmaruf",
"id": 32699797,
"node_id": "MDQ6VXNlcjMyNjk5Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/32699797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbmaruf",
"html_url": "https://github.com/sbmaruf",
"followers_url": "https://api.github.com/users/sbmaruf/followers",
"following_url": "https://api.github.com/users/sbmaruf/following{/other_user}",
"gists_url": "https://api.github.com/users/sbmaruf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbmaruf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbmaruf/subscriptions",
"organizations_url": "https://api.github.com/users/sbmaruf/orgs",
"repos_url": "https://api.github.com/users/sbmaruf/repos",
"events_url": "https://api.github.com/users/sbmaruf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbmaruf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,634,049,293,000 | 1,634,049,565,000 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
Right now there are a few hiccups when using `interleave_datasets`. An interleaved dataset iterates only until the smallest dataset exhausts its iterator, so the larger datasets may never complete a full epoch of iteration.
This also complicates epoch accounting, since there is no way to track how many epochs each dataset in `interleave_datasets` has completed.
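A minimal sketch of the current behavior (dataset sizes are made up for illustration; exact counts may vary across `datasets` versions):
```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"x": list(range(100))})
d2 = Dataset.from_dict({"x": list(range(4))})  # much smaller

mixed = interleave_datasets([d1, d2])
# Alternation stops as soon as the smallest dataset is exhausted,
# so d1 never completes a full epoch in one pass over `mixed`.
print(len(mixed))  # 8 here: 4 examples taken from each dataset
```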
**Describe the solution you'd like**
For the `interleave_datasets` module,
- [ ] Add a boolean argument `--stop-iter` to `interleave_datasets` that controls whether iteration can continue indefinitely. That is, it should not raise a `StopIteration` exception when `--stop-iter=False`.
- [ ] Add an internal list variable `iter_cnt` that tracks how many times (in steps/epochs) each dataset has been iterated at any given point.
- [ ] Add an argument `--max-iter` (list type) that specifies the maximum number of times each dataset can iterate. After one dataset completes its `--max-iter`, the other datasets should continue sampling, and only when all datasets have finished their respective `--max-iter` should `StopIteration` be raised.
Note: I'm new to the `datasets` API. Maybe these features already exist in the library.
Since multitask training is one of the latest trends, I believe this feature would make the `datasets` API more popular.
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3064/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3063/comments | https://api.github.com/repos/huggingface/datasets/issues/3063/events | https://github.com/huggingface/datasets/issues/3063 | 1,023,588,297 | I_kwDODunzps49ArfJ | 3,063 | Windows CI is unable to test streaming properly because of SSL issues | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"I think this problem is already fixed:\r\n```python\r\nIn [4]: import fsspec\r\n ...:\r\n ...: url = \"https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes\"\r\n ...:\r\n ...: fsspec.open(url).open()\r\nOut[4]: <File-like object HTTPFileSystem, https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattribu\r\n```",
"No I'm still having this issue on my windows, and so does the CI"
] | 1,634,031,220,000 | 1,634,663,512,000 | null | MEMBER | null | null | null | In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443
The issue appears only on Windows with asyncio. On Linux it works. With requests it works as well, and with the production environment huggingface.co it also works.
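For comparison, the equivalent check through `requests` (which succeeds, as noted above) looks like:
```python
import requests

url = "https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes"
# requests uses a different SSL stack than aiohttp; this call working
# points at an aiohttp-on-Windows certificate issue.
response = requests.get(url)
print(response.status_code)
```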
To reproduce on Windows:
```python
import fsspec
# use any URL to a file in a dataset repo
url = "https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes"
fsspec.open(url).open()
```
raises
```python
FileNotFoundError: https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes
```
because of
```python
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3063/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3062/comments | https://api.github.com/repos/huggingface/datasets/issues/3062/events | https://github.com/huggingface/datasets/pull/3062 | 1,023,209,592 | PR_kwDODunzps4tCxfK | 3,062 | Update summary on PyPi beyond NLP | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,994,866,000 | 1,634,115,354,000 | 1,634,115,354,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3062",
"html_url": "https://github.com/huggingface/datasets/pull/3062",
"diff_url": "https://github.com/huggingface/datasets/pull/3062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3062.patch",
"merged_at": 1634115353000
} | More than just NLP now | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3062/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3062/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3061/comments | https://api.github.com/repos/huggingface/datasets/issues/3061/events | https://github.com/huggingface/datasets/issues/3061 | 1,023,103,119 | I_kwDODunzps48-1CP | 3,061 | Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?) | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"@lhoestq, @albertvillanova can we have `**tqdm_kwargs` in `map`? If there are any fields that are important to our tqdm (like iterable or unit), we can pop them before initialising the tqdm object so as to avoid duplicity.",
"Hi ! Sounds like a good idea :)\r\n\r\nAlso I think it would be better to have this as an actual parameters instead of kwargs to make it clearer"
] | 1,633,985,389,000 | 1,634,895,250,000 | null | NONE | null | null | null | **A clear and concise description of what you want to happen.**
It would be great to be able to nest HuggingFace `Datasets.map()` progress bars inside an outer tqdm bar (and, while we're at it, the same for other functions).
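Something along these lines would cover it (purely hypothetical: the `tqdm_kwargs` parameter does not exist in `map` today and is sketched here only to illustrate the request):
```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["a", "b", "c"]})

dataset = dataset.map(
    lambda example: {"text": example["text"].upper()},
    desc="uppercasing",                          # already supported
    tqdm_kwargs={"leave": True, "position": 1},  # proposed passthrough to tqdm
)
```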
**Describe alternatives you've considered**
By the way, is there no way to interact directly with the underlying tqdm module, e.g. by passing `**kwargs` through?
**Additional context**
Furthering tqdm integration #2374 and huggingface/transformers#11797 solutioned by huggingface/transformers#12226 provided with tqdm description as `desc=`
@sgugger @bhavitvyamalik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3061/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3060/comments | https://api.github.com/repos/huggingface/datasets/issues/3060/events | https://github.com/huggingface/datasets/issues/3060 | 1,022,936,396 | I_kwDODunzps48-MVM | 3,060 | load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached" | {
"login": "RylanSchaeffer",
"id": 8942987,
"node_id": "MDQ6VXNlcjg5NDI5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8942987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RylanSchaeffer",
"html_url": "https://github.com/RylanSchaeffer",
"followers_url": "https://api.github.com/users/RylanSchaeffer/followers",
"following_url": "https://api.github.com/users/RylanSchaeffer/following{/other_user}",
"gists_url": "https://api.github.com/users/RylanSchaeffer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RylanSchaeffer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RylanSchaeffer/subscriptions",
"organizations_url": "https://api.github.com/users/RylanSchaeffer/orgs",
"repos_url": "https://api.github.com/users/RylanSchaeffer/repos",
"events_url": "https://api.github.com/users/RylanSchaeffer/events{/privacy}",
"received_events_url": "https://api.github.com/users/RylanSchaeffer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload? Please use:\r\n```python\r\ndataset = load_dataset(\"openwebtext\", download_mode=\"FORCE_REDOWNLOAD\")\r\n```\r\n\r\nLet me know if the problem persists.",
"I close this issue for the moment. Feel free to re-open it again if the problem persists."
] | 1,633,971,927,000 | 1,635,400,341,000 | 1,635,400,341,000 | NONE | null | null | null | ## Describe the bug
When I try `load_dataset('openwebtext')`, I receive an "EOFError: Compressed file ended before the end-of-stream marker was reached" error.
## Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('openwebtext')
```
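A workaround suggested in the discussion below is to force a fresh download, since a truncated cached download is the usual cause of this EOFError:
```python
from datasets import load_dataset

# Re-downloads the data files instead of reusing a possibly truncated cache
dataset = load_dataset("openwebtext", download_mode="force_redownload")
```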
## Expected results
I expect the `dataset` variable to be properly constructed.
## Actual results
```
File "/home/rschaef/CoCoSci-Language-Distillation/distillation_v2/ratchet_learning/tasks/base.py", line 37, in create_dataset
dataset_str,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/load.py", line 1117, in load_dataset
use_auth_token=use_auth_token,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 637, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rschaef/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/85b3ae7051d2d72e7c5fdf6dfb462603aaa26e9ed506202bf3a24d261c6c40a1/openwebtext.py", line 61, in _split_generators
dl_dir = dl_manager.download_and_extract(_URL)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 261, in extract
partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 316, in cached_path
output_path, force_extract=download_config.force_extract
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 40, in extract
self.extractor.extract(input_path, output_path, extractor=extractor)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 179, in extract
return extractor.extract(input_path, output_path)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 53, in extract
tar_file.extractall(output_path)
File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2052, in extract
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj
buf = src.read(bufsize)
File "/usr/lib/python3.6/lzma.py", line 200, in read
return self._buffer.read(size)
File "/usr/lib/python3.6/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/usr/lib/python3.6/_compression.py", line 99, in read
raise EOFError("Compressed file ended before the "
python-BaseException
EOFError: Compressed file ended before the end-of-stream marker was reached
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3060/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3059/comments | https://api.github.com/repos/huggingface/datasets/issues/3059/events | https://github.com/huggingface/datasets/pull/3059 | 1,022,620,057 | PR_kwDODunzps4tA54w | 3,059 | Fix task reloading from cache | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,953,784,000 | 1,633,955,019,000 | 1,633,955,019,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3059",
"html_url": "https://github.com/huggingface/datasets/pull/3059",
"diff_url": "https://github.com/huggingface/datasets/pull/3059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3059.patch",
"merged_at": 1633955018000
} | When reloading a dataset from the cache while doing `map`, the task templates were kept instead of being updated according to the output of the `map` function. This is an issue because we drop the task templates that are no longer compatible after `map`, for example if a column of the template was removed.
This PR fixes this and for convenience introduces a decorator `@transmit_tasks` that takes care of doing this verification, similar to the `@transmit_format` decorator.
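For illustration, the gist of such a decorator could look roughly like this (a sketch only, not the actual PR code; the attribute names are assumptions):
```python
from functools import wraps

def transmit_tasks(func):
    """Drop task templates whose columns no longer exist after a transform (sketch)."""
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        out = func(self, *args, **kwargs)
        outputs = list(out.values()) if isinstance(out, dict) else [out]
        for dataset in outputs:
            if dataset.info.task_templates:
                # Keep only the templates whose source columns survived the transform
                dataset.info.task_templates = [
                    template
                    for template in dataset.info.task_templates
                    if all(col in dataset.column_names for col in template.column_mapping)
                ]
        return out
    return wrapper
```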
This should fix issue https://github.com/huggingface/datasets/issues/3047 cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3059/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3059/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3058/comments | https://api.github.com/repos/huggingface/datasets/issues/3058/events | https://github.com/huggingface/datasets/issues/3058 | 1,022,612,664 | I_kwDODunzps4889S4 | 3,058 | Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader. | {
"login": "hobbitlzy",
"id": 35392624,
"node_id": "MDQ6VXNlcjM1MzkyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/35392624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hobbitlzy",
"html_url": "https://github.com/hobbitlzy",
"followers_url": "https://api.github.com/users/hobbitlzy/followers",
"following_url": "https://api.github.com/users/hobbitlzy/following{/other_user}",
"gists_url": "https://api.github.com/users/hobbitlzy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hobbitlzy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hobbitlzy/subscriptions",
"organizations_url": "https://api.github.com/users/hobbitlzy/orgs",
"repos_url": "https://api.github.com/users/hobbitlzy/repos",
"events_url": "https://api.github.com/users/hobbitlzy/events{/privacy}",
"received_events_url": "https://api.github.com/users/hobbitlzy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! I think this issue is more related to the `transformers` project. Could you open an issue on https://github.com/huggingface/transformers ?\r\n\r\nAnyway I think the issue could be that both wikipedia and bookcorpusopen have an additional \"title\" column, contrary to wikitext which only has a \"text\" column. After calling `load_dataset`, can you try doing `dataset = dataset.remove_columns(\"title\")` ?",
"Removing the \"title\" column works! Thanks for your advice.\r\n\r\nMaybe I should still create an issue to `transformers' to mark this solution?"
] | 1,633,953,299,000 | 1,634,182,083,000 | null | NONE | null | null | null | ## Describe the bug
I had been using previous versions of `transformers` and `datasets`, with which the `wikipedia` dataset worked fine. Recently I upgraded both to the newest versions and found that it now raises errors. I also tried other datasets: `wikitext` works, while `bookcorpusopen` raises the same errors as `wikipedia`.
## Steps to reproduce the bug
Run `run_mlm_no_trainer.py` following the instructions at this [link](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling). Change the dataset from wikitext to wikipedia or bookcorpusopen. BTW, the transformers library is version 4.11.3.
## Expected results
Data batches are fetched from the data loader and training proceeds.
## Actual results
Fetching the very first data batch raises an error.
`Traceback (most recent call last):
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors
tensor = as_tensor(value)
ValueError: too many dimensions 'str'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "src/original_run_mlm_no_trainer.py", line 528, in <module>
main()
File "src/original_run_mlm_no_trainer.py", line 488, in main
for step, batch in enumerate(train_dataloader):
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/accelerate/data_loader.py", line 303, in __iter__
for batch in super().__iter__():
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/data/data_collator.py", line 41, in __call__
return self.torch_call(features)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/data/data_collator.py", line 671, in torch_call
batch = self.tokenizer.pad(examples, return_tensors="pt", pad_to_multiple_of=self.pad_to_multiple_of)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2774, in pad
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 210, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 722, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
`
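A workaround suggested in the discussion below: unlike `wikitext`, both `wikipedia` and `bookcorpusopen` carry an extra string column `"title"` that the data collator cannot batch, so dropping it after loading avoids the error. A sketch (the config name is just an example):
```python
from datasets import load_dataset

raw_datasets = load_dataset("wikipedia", "20200501.en")
# Drop the extra string column that wikitext does not have
raw_datasets = raw_datasets.remove_columns("title")
```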
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.8.0-59-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.6
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3058/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3057/comments | https://api.github.com/repos/huggingface/datasets/issues/3057/events | https://github.com/huggingface/datasets/issues/3057 | 1,022,508,315 | I_kwDODunzps488j0b | 3,057 | Error in per class precision computation | {
"login": "tidhamecha2",
"id": 38906722,
"node_id": "MDQ6VXNlcjM4OTA2NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/38906722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tidhamecha2",
"html_url": "https://github.com/tidhamecha2",
"followers_url": "https://api.github.com/users/tidhamecha2/followers",
"following_url": "https://api.github.com/users/tidhamecha2/following{/other_user}",
"gists_url": "https://api.github.com/users/tidhamecha2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tidhamecha2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tidhamecha2/subscriptions",
"organizations_url": "https://api.github.com/users/tidhamecha2/orgs",
"repos_url": "https://api.github.com/users/tidhamecha2/repos",
"events_url": "https://api.github.com/users/tidhamecha2/events{/privacy}",
"received_events_url": "https://api.github.com/users/tidhamecha2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @tidhamecha2, thanks for reporting.\r\n\r\nIndeed, we fixed this issue just one week ago: #3008\r\n\r\nThe fix will be included in our next version release.\r\n\r\nIn the meantime, you can incorporate the fix by installing `datasets` from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```"
] | 1,633,946,719,000 | 1,633,947,464,000 | 1,633,947,376,000 | NONE | null | null | null | ## Describe the bug
When trying to get the per-class precision values by providing `average=None`, the following error is thrown: `ValueError: can only convert an array of size 1 to a Python scalar`
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
precision_metric = load_metric("precision")
predictions = [0, 2, 1, 0, 0, 1]
references = [0, 1, 2, 0, 1, 2]
results = precision_metric.compute(predictions=predictions, references=references, average=None)
```
## Expected results
` {'precision': array([0.66666667, 0. , 0. ])}`
as per https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py
## Actual results
```
output = self._compute(predictions=predictions, references=references, **kwargs)
File "~/.cache/huggingface/modules/datasets_modules/metrics/precision/94709a71c6fe37171ef49d3466fec24dee9a79846c9f176dff66a649e9811690/precision.py", line 110, in _compute
sample_weight=sample_weight,
ValueError: can only convert an array of size 1 to a Python scalar
```
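Per the discussion below, this was fixed on the master branch (#3008) before being released, so one interim option is installing `datasets` from master, e.g. `pip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets`.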
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: linux
- Python version: 3.6.9
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3057/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3056/comments | https://api.github.com/repos/huggingface/datasets/issues/3056/events | https://github.com/huggingface/datasets/pull/3056 | 1,022,345,564 | PR_kwDODunzps4tAB9h | 3,056 | Fix meteor metric for version >= 3.6.4 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,936,304,000 | 1,633,937,360,000 | 1,633,937,359,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3056",
"html_url": "https://github.com/huggingface/datasets/pull/3056",
"diff_url": "https://github.com/huggingface/datasets/pull/3056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3056.patch",
"merged_at": 1633937359000
} | After the `nltk` update, the meteor metric expects pre-tokenized inputs (a breaking change).
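A rough sketch of the kind of version shim this implies (illustrative only, not necessarily the PR's exact code):
```python
import nltk
from nltk.translate import meteor_score
from packaging import version

# nltk >= 3.6.4 expects pre-tokenized Iterable[str] inputs
NLTK_REQUIRES_TOKENS = version.parse(nltk.__version__) >= version.parse("3.6.4")

def compute_meteor(reference: str, hypothesis: str) -> float:
    if NLTK_REQUIRES_TOKENS:
        from nltk import word_tokenize
        return meteor_score.single_meteor_score(
            word_tokenize(reference), word_tokenize(hypothesis)
        )
    return meteor_score.single_meteor_score(reference, hypothesis)
```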
This PR fixes this issue, while maintaining compatibility with older versions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3056/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3055/comments | https://api.github.com/repos/huggingface/datasets/issues/3055/events | https://github.com/huggingface/datasets/issues/3055 | 1,022,319,238 | I_kwDODunzps4871qG | 3,055 | CI test suite fails after meteor metric update | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,633,934,232,000 | 1,633,937,431,000 | 1,633,937,431,000 | MEMBER | null | null | null | ## Describe the bug
CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010
Stack trace:
```
___________________ LocalMetricTest.test_load_metric_meteor ____________________
[gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6
self = <tests.test_metric_common.LocalMetricTest testMethod=test_load_metric_meteor>
metric_name = 'meteor'
def test_load_metric(self, metric_name):
doctest.ELLIPSIS_MARKER = "[...]"
metric_module = importlib.import_module(datasets.load.prepare_module(os.path.join("metrics", metric_name))[0])
metric = datasets.load.import_main_class(metric_module.__name__, dataset=False)
# check parameters
parameters = inspect.signature(metric._compute).parameters
self.assertTrue("predictions" in parameters)
self.assertTrue("references" in parameters)
self.assertTrue(all([p.kind != p.VAR_KEYWORD for p in parameters.values()])) # no **kwargs
# run doctest
with self.patch_intensive_calls(metric_name, metric_module.__name__):
with self.use_local_metrics():
> results = doctest.testmod(metric_module, verbose=True, raise_on_error=True)
tests/test_metric_common.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1951: in testmod
runner.run(test)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1839: in run
r = DocTestRunner.run(self, test, compileflags, out, False)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1476: in run
return self.__run(test, compileflags, out)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1382: in __run
exception)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <doctest.DebugRunner object at 0x7f4c26bd3da0>
out = <built-in method write of _io.TextIOWrapper object at 0x7f51a21852d0>
test = <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Mete...ets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)>
example = <doctest.Example object at 0x7f4c26bd3eb8>
exc_info = (<class 'TypeError'>, TypeError('"hypothesis" expects pre-tokenized hypothesis (Iterable[str]): It is a guide to action which ensures that the military always obeys the commands of the party',), <traceback object at 0x7f4cd01afec8>)
def report_unexpected_exception(self, out, test, example, exc_info):
> raise UnexpectedException(test, example, exc_info)
E doctest.UnexpectedException: <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Meteor from /tmp/pytest-of-circleci/pytest-0/popen-gw1/cache/modules/datasets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)>
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1845: UnexpectedException
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3055/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3054/comments | https://api.github.com/repos/huggingface/datasets/issues/3054/events | https://github.com/huggingface/datasets/pull/3054 | 1,022,108,186 | PR_kwDODunzps4s_TmE | 3,054 | Update Biosses | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,904,712,000 | 1,634,115,867,000 | 1,634,115,867,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3054",
"html_url": "https://github.com/huggingface/datasets/pull/3054",
"diff_url": "https://github.com/huggingface/datasets/pull/3054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3054.patch",
"merged_at": 1634115867000
} | Fix variable naming | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3054/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3053/comments | https://api.github.com/repos/huggingface/datasets/issues/3053/events | https://github.com/huggingface/datasets/issues/3053 | 1,022,076,905 | I_kwDODunzps4866fp | 3,053 | load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type | {
"login": "davidbau",
"id": 3458792,
"node_id": "MDQ6VXNlcjM0NTg3OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3458792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidbau",
"html_url": "https://github.com/davidbau",
"followers_url": "https://api.github.com/users/davidbau/followers",
"following_url": "https://api.github.com/users/davidbau/following{/other_user}",
"gists_url": "https://api.github.com/users/davidbau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidbau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidbau/subscriptions",
"organizations_url": "https://api.github.com/users/davidbau/orgs",
"repos_url": "https://api.github.com/users/davidbau/repos",
"events_url": "https://api.github.com/users/davidbau/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidbau/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I encountered the same bug using different datasets.\r\nany suggestions?",
"+1, can reproduce here!"
] | 1,633,895,721,000 | 1,636,408,118,000 | null | NONE | null | null | null | ## Describe the bug
When loading `the_pile_openwebtext2`, we get the error `pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type`
## Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset('the_pile_openwebtext2')
```
## Expected results
Should download the dataset, convert it to an arrow file, and return a working Dataset object.
## Actual results
The download works, but conversion to the arrow file fails as follows:
```
>>> ds = datasets.load_dataset('the_pile_openwebtext2')
Downloading and preparing dataset openwebtext2/plain_text (download: 27.33 GiB, generated: 63.86 GiB, post-processed: Unknown size, total: 91.19 GiB) to /home/davidbau/.cache/huggingface/datasets/openwebtext2/plain_text/1.0.0/c48ec73ba3483bac673463f48f67e9a4fd8cb49a9d6ec4fb957f0b424b97cf25...
Traceback (most recent call last):
  File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/builder.py", line 1133, in _prepare_split
    writer.write(example, key)
  File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 366, in write
    self.write_examples_on_file()
  File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 311, in write_examples_on_file
    pa_array = pa.array(typed_sequence)
  File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
  File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 115, in __arrow_array__
    out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type)
  File "pyarrow/array.pxi", line 305, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 20.04
- Python version: 3.9
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3053/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3052/comments | https://api.github.com/repos/huggingface/datasets/issues/3052/events | https://github.com/huggingface/datasets/issues/3052 | 1,021,944,435 | I_kwDODunzps486aJz | 3,052 | load_dataset cannot download the data and hangs on forever if cache dir specified | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The following packages are causing the inconsistency:\r\n> \r\n> - conda-forge/noarch::datasets==1.12.1=pyhd8ed1ab_1\r\n> - conda-forge/win-64::multiprocess==0.70.12.2=py38h294d835_0\r\n> done\r\n> \r\n> Package Plan\r\n> \r\n> environment location: C:\\xxx\\anaconda3\\envs\\UnBias-94-1\r\n> \r\n> added / updated specs:\r\n> - datasets\r\n> \r\n> \r\n> The following NEW packages will be INSTALLED:\r\n> \r\n> dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0\r\n> \r\n> The following packages will be UPDATED:\r\n> \r\n> ca-certificates pkgs/main::ca-certificates-2021.9.30-~ --> conda-forge::ca-certificates-2021.10.8-h5b45459_0\r\n> certifi pkgs/main::certifi-2021.5.30-py38haa9~ --> conda-forge::certifi-2021.10.8-py38haa244fe_0\r\n> \r\n> The following packages will be SUPERSEDED by a higher-priority channel:\r\n> "
] | 1,633,861,896,000 | 1,633,949,829,000 | 1,633,949,796,000 | NONE | null | null | null | ## Describe the bug
After updating `datasets`, code that had run just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine causes the data download to hang forever. The same call without _cache_dir_ works just fine. Surprisingly, the exact same code runs perfectly fine on a Linux Docker instance running in the cloud.
Unfortunately, I updated Windows at the same time, and I can't remember which version of `datasets` was running in my conda environment prior to the update; otherwise I would have tried both to check this out. :(
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train', cache_dir=cache_dir)
```
Note that the exact same code without the _cache_dir_ argument works perfectly fine.
```python
cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train')
```
## Expected results
Downloads the dataset, and the cache is handled in the _cache_dir_ directory.
## Actual results
The data download keeps hanging forever, **NO TRACEBACK**!
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3052/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3051/comments | https://api.github.com/repos/huggingface/datasets/issues/3051/events | https://github.com/huggingface/datasets/issues/3051 | 1,021,852,234 | I_kwDODunzps486DpK | 3,051 | Non-Matching Checksum Error with crd3 dataset | {
"login": "RylanSchaeffer",
"id": 8942987,
"node_id": "MDQ6VXNlcjg5NDI5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8942987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RylanSchaeffer",
"html_url": "https://github.com/RylanSchaeffer",
"followers_url": "https://api.github.com/users/RylanSchaeffer/followers",
"following_url": "https://api.github.com/users/RylanSchaeffer/following{/other_user}",
"gists_url": "https://api.github.com/users/RylanSchaeffer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RylanSchaeffer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RylanSchaeffer/subscriptions",
"organizations_url": "https://api.github.com/users/RylanSchaeffer/orgs",
"repos_url": "https://api.github.com/users/RylanSchaeffer/repos",
"events_url": "https://api.github.com/users/RylanSchaeffer/events{/privacy}",
"received_events_url": "https://api.github.com/users/RylanSchaeffer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I got the same error for another dataset (`multi_woz_v22`):\r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']\r\n```",
"I'm seeing the same issue as @RylanSchaeffer:\r\nPython 3.7.11, macOs 11.4\r\ndatasets==1.14.0\r\n\r\nfails on:\r\n```python\r\ndataset = datasets.load_dataset(\"multi_woz_v22\")\r\n```"
] | 1,633,829,563,000 | 1,635,654,752,000 | null | NONE | null | null | null | ## Describe the bug
When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown.
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset('crd3', split='train')
```
## Expected results
I expect no error to be thrown.
## Actual results
A non-matching checksum error is thrown.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/RevanthRameshkumar/CRD3/archive/master.zip']
```
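If the mismatch comes from stale checksum metadata rather than a corrupted download, a possible workaround (use with care, since the upstream archive may genuinely have changed) is to skip verification with the `datasets` 1.x flag:
```python
from datasets import load_dataset

# Skips checksum/size verification of the downloaded files.
dataset = load_dataset('crd3', split='train', ignore_verifications=True)
```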
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3051/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3050/comments | https://api.github.com/repos/huggingface/datasets/issues/3050/events | https://github.com/huggingface/datasets/pull/3050 | 1,021,772,622 | PR_kwDODunzps4s-anK | 3,050 | Fix streaming: catch Timeout error | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm running a large test.\r\nLet's see if I get any error within a few days.",
"This time it stopped after 8h but correctly raised `ConnectionError: Server Disconnected`.\r\n\r\nTraceback:\r\n```\r\nTraceback (most recent call last): \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 1027, in <module> \r\n main() \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 991, in main \r\n for batch in tqdm( \r\n File \"/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__ \r\n for obj in iterable: \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 376, in data_loader_streaming\r\n for item in dataset:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 341, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 338, in _iter\r\n yield from ex_iterable\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 179, in __iter__\r\n key_examples_list = [(key, example)] + [\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 179, in <listcomp>\r\n key_examples_list = [(key, example)] + [\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 176, in __iter__\r\n for key, example in iterator:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 225, in __iter__\r\n for x in self.ex_iterable:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 99, in __iter__\r\n for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards):\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 287, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/koush/datasets/src/datasets/packaged_modules/json/json.py\", line 107, in _generate_tables\r\n batch = f.read(self.config.chunksize)\r\n File \"/home/koush/datasets/src/datasets/utils/streaming_download_manager.py\", line 136, in read_with_retries\r\n raise ConnectionError(\"Server Disconnected\")\r\nConnectionError: Server Disconnected\r\n```\r\n\r\nRight before this error, the warnings were correctly raised:\r\n\r\n```\r\n10/10/2021 06:02:26 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [1/3]\r\n10/10/2021 06:02:27 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [2/3] \r\n10/10/2021 06:02:28 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [3/3\r\n```\r\n\r\nI'm going to see what happens if I change the max retries to 20 and the interval to 5.",
"Also maybe we can raise the Server Disconnected error with more info about what kind of error caused it (client error, time out, etc.)",
"I have 2 runs:\r\n* [run 1](https://wandb.ai/dalle-mini/dalle-mini/runs/1nj161cl?workspace=user-borisd13) with [this data](https://huggingface.co/datasets/dalle-mini/encoded) that I will remove soon because I now use the 2nd one\r\n* [run 2](https://wandb.ai/dalle-mini/dalle-mini/runs/he9rrc3q?workspace=user-borisd13) with [this data](https://huggingface.co/datasets/dalle-mini/encoded-vqgan_imagenet_f16_16384)\r\n* `load_dataset(dataset_repo, data_files={'train':'data/train/*.jsonl', 'validation':'data/valid/*.jsonl'}, streaming=True)`\r\n\r\nThey have now been running by a bit more than a day for one run and 15h for the other.\r\n\r\nThe error logs are not shown in wandb because the script use `pylogging` (not sure why, I should change it) but basically so far with the new settings I had one timeout in each with successful reconnect afterwards.\r\n\r\nSo I think it's a good idea to have:\r\n* `STREAMING_READ_RETRY_INTERVAL = 5` since before my runs would get 3 errors in a row (with the default 1 second pause)\r\n* `STREAMING_READ_MAX_RETRIES` should also be increased. Since this type of error does not happen a lot, IΒ would still have a large number (at least 10) because a stopped training run may be a big issue if checkpointing/restart is not well implemented which is not always trivial",
"I agree ! Feel free to open a PR to increase both values"
] | 1,633,803,560,000 | 1,634,052,498,000 | 1,633,944,938,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3050",
"html_url": "https://github.com/huggingface/datasets/pull/3050",
"diff_url": "https://github.com/huggingface/datasets/pull/3050.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3050.patch",
"merged_at": 1633944938000
} | Catches Timeout error during streaming.
fix #3049 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3050/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3049/comments | https://api.github.com/repos/huggingface/datasets/issues/3049/events | https://github.com/huggingface/datasets/issues/3049 | 1,021,770,008 | I_kwDODunzps485vkY | 3,049 | TimeoutError during streaming | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,633,802,811,000 | 1,633,944,938,000 | 1,633,944,938,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
I got a TimeoutError after streaming for about 10h.
## Steps to reproduce the bug
The code is very long, but a test that streams data indefinitely could reproduce it, though the error may take a while to appear.
## Expected results
This error was not expected by the code, which considers only `ClientError` but not `TimeoutError`.
See [this line](https://github.com/huggingface/datasets/blob/2814fbd0e18150be409f10804670e98d9ecb87d4/src/datasets/utils/streaming_download_manager.py#L129).
Based on the traceback, it looks like the `TimeoutError` was not captured.
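A minimal sketch of what the fix could look like, assuming the structure of `read_with_retries` in the file linked above (the actual change in #3050 may differ):
```python
import asyncio
import logging
import time

import aiohttp.client_exceptions
import fsspec.exceptions

logger = logging.getLogger(__name__)


def read_with_retries(read, *args, max_retries=3, **kwargs):
    # Retry on timeouts as well as on client errors, instead of ClientError only.
    for retry in range(1, max_retries + 1):
        try:
            return read(*args, **kwargs)
        except (
            aiohttp.client_exceptions.ClientError,
            asyncio.TimeoutError,
            fsspec.exceptions.FSTimeoutError,
        ):
            logger.warning(f"Got disconnected from remote data host. Retrying in 1sec [{retry}/{max_retries}]")
            time.sleep(1)
    raise ConnectionError("Server Disconnected")
```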
## Actual results
```
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 25, in _runner
result[0] = await coro
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 614, in async_fetch_range
out = await r.read()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/client_reqrep.py", line 1032, in read
self._body = await self.content.read()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 370, in read
block = await self.readany()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 392, in readany
await self._wait("readany")
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 306, in _wait
await waiter
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/helpers.py", line 656, in __exit__
raise asyncio.TimeoutError from None
asyncio.exceptions.TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 1027, in <module>
main()
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 991, in main
for batch in tqdm(
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__
for obj in iterable:
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 376, in data_loader_streaming
for item in dataset:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in __iter__
key_examples_list = [(key, example)] + [
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in <listcomp>
key_examples_list = [(key, example)] + [
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 176, in __iter__
for key, example in iterator:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 225, in __iter__
for x in self.ex_iterable:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 99, in __iter__
for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards):
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 287, in wrapper
for key, table in generate_tables_fn(**kwargs):
File "/home/koush/datasets/src/datasets/packaged_modules/json/json.py", line 107, in _generate_tables
batch = f.read(self.config.chunksize)
File "/home/koush/datasets/src/datasets/utils/streaming_download_manager.py", line 126, in read_with_retries
out = read(*args, **kwargs)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 572, in read
return super().read(length)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/spec.py", line 1533, in read
out = self.cache._fetch(self.loc, self.loc + length)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/caching.py", line 390, in _fetch
self.cache = self.fetcher(start, bend)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 91, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 69, in sync
raise FSTimeoutError from return_result
fsspec.exceptions.FSTimeoutError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3049/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3048/comments | https://api.github.com/repos/huggingface/datasets/issues/3048/events | https://github.com/huggingface/datasets/issues/3048 | 1,021,765,661 | I_kwDODunzps485ugd | 3,048 | Identify which shard data belongs to | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Independently of this I think it raises the need to allow multiprocessing during streaming so that we get samples from multiple shards in one batch."
] | 1,633,801,595,000 | 1,633,811,057,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I'm training on a large dataset made of multiple sub-datasets.
During training I can observe some jumps in loss which may correspond to different shards.
![image](https://user-images.githubusercontent.com/715491/136668758-521263aa-a9b2-4ad2-8d22-060b6bf86a1c.png)
My suspicion is that either:
* some of the sub-datasets are harder for the model than others
* some of the sub-datasets are not formatted properly
I'd like to identify which shards correspond to those jumps.
**Describe the solution you'd like**
It would be nice to have a key associated with each data sample or data batch, containing details on where the data comes from (shard idx + item idx within the shard).
This should be supported both in local and streaming mode.
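For illustration only, a hypothetical API could look like this (the `with_provenance` flag and the `_shard_idx`/`_item_idx` fields do not exist in `datasets`; they are just a sketch of the request):
```python
from datasets import load_dataset

# Hypothetical flag and fields, NOT part of the current API.
ds = load_dataset("my/dataset", split="train", streaming=True, with_provenance=True)
for sample in ds:
    print(sample["_shard_idx"], sample["_item_idx"])
```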
**Describe alternatives you've considered**
A fix would be for me to add the details (shard id, sample id) myself as part of each data sample, e.g. with a preprocessing pass like the sketch below.
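A minimal sketch of that preprocessing pass (assuming a `data/train/*.jsonl` layout; paths and field names are illustrative):
```python
import glob
import json

# Tag every record with its shard and item index before re-uploading.
for shard_idx, path in enumerate(sorted(glob.glob("data/train/*.jsonl"))):
    tagged_path = path.replace(".jsonl", ".tagged.jsonl")
    with open(path) as f_in, open(tagged_path, "w") as f_out:
        for item_idx, line in enumerate(f_in):
            record = json.loads(line)
            record["shard_id"] = shard_idx
            record["item_id"] = item_idx
            f_out.write(json.dumps(record) + "\n")
```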
The inconvenience is that it requires users to reprocess and re-upload every dataset when they need this feature. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3048/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3047/comments | https://api.github.com/repos/huggingface/datasets/issues/3047/events | https://github.com/huggingface/datasets/issues/3047 | 1,021,360,616 | I_kwDODunzps484Lno | 3,047 | Loading from cache a dataset for LM built from a text classification dataset sometimes errors | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This has been fixed in 1.15, let me know if you still have this issue"
] | 1,633,717,391,000 | 1,635,959,588,000 | 1,635,959,588,000 | MEMBER | null | null | null | ## Describe the bug
Yes, I know, that description sucks. The problem arises in the course, when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try to, since it's a bit fickle):
Create a dataset for masked language modeling from the IMDB dataset.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
imdb_dataset = load_dataset("imdb", split="train")
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
chunk_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
# Compute length of concatenated texts
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the last chunk if it's smaller than chunk_size
total_length = (total_length // chunk_size) * chunk_size
# Split by chunks of max_len.
result = {
k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]
for k, t in concatenated_examples.items()
}
# Create a new labels column
result["labels"] = result["input_ids"].copy()
return result
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Until now, all is well. The problem comes when you re-execute that code, more specifically:
```python
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Try several times if the bug doesn't appear instantly, or run each line one at a time, ideally in a notebook/Colab, and at some point you should get:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-40-357a56ee3d53> in <module>
----> 1 lm_dataset = tokenized_dataset.map(group_texts, batched=True)
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1947 new_fingerprint=new_fingerprint,
1948 disable_tqdm=disable_tqdm,
-> 1949 desc=desc,
1950 )
1951 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
424 }
425 # apply actual function
--> 426 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
427 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
428 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2138 if os.path.exists(cache_file_name) and load_from_cache_file:
2139 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2140 info = self.info.copy()
2141 info.features = features
2142 return Dataset.from_file(cache_file_name, info=info, split=self.split)
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
```
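Note: the diagnosis below points at a stale text classification task template. If that is indeed the cause, a possible workaround (a sketch, not a verified fix) is to clear the inherited templates before the second `map` call:
```python
# Assumption: dropping the TextClassification template inherited from IMDB
# avoids the KeyError when the cached dataset's info is copied.
tokenized_dataset.info.task_templates = []
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```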
It seems that when loading the cache, the dataset tries to access some kind of text classification template (which I imagine comes from the original dataset) and looks for a key that has since been removed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3047/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3046/comments | https://api.github.com/repos/huggingface/datasets/issues/3046/events | https://github.com/huggingface/datasets/pull/3046 | 1,021,021,368 | PR_kwDODunzps4s8MjS | 3,046 | Fix MedDialog metadata JSON | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,694,680,000 | 1,633,938,403,000 | 1,633,938,402,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3046",
"html_url": "https://github.com/huggingface/datasets/pull/3046",
"diff_url": "https://github.com/huggingface/datasets/pull/3046.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3046.patch",
"merged_at": 1633938402000
} | Fix #2969. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3046/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3045/comments | https://api.github.com/repos/huggingface/datasets/issues/3045/events | https://github.com/huggingface/datasets/pull/3045 | 1,020,968,704 | PR_kwDODunzps4s8B2b | 3,045 | Fix inconsistent caching behaviour in Dataset.map() with multiprocessing #3044 | {
"login": "vlievin",
"id": 9859840,
"node_id": "MDQ6VXNlcjk4NTk4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9859840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vlievin",
"html_url": "https://github.com/vlievin",
"followers_url": "https://api.github.com/users/vlievin/followers",
"following_url": "https://api.github.com/users/vlievin/following{/other_user}",
"gists_url": "https://api.github.com/users/vlievin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vlievin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vlievin/subscriptions",
"organizations_url": "https://api.github.com/users/vlievin/orgs",
"repos_url": "https://api.github.com/users/vlievin/repos",
"events_url": "https://api.github.com/users/vlievin/events{/privacy}",
"received_events_url": "https://api.github.com/users/vlievin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for noticing this inconsistence and suggesting a fix :)\r\n\r\nIf I understand correctly you try to pass the same fingerprint to each processed shard of the dataset. This can be an issue since each shard is actually a different dataset with different data: they shouldn't have the same fingerprint.\r\n\r\nIdeally we want the result after `map` to have this fingerprint. The result after `map` is the concatenation of all the processed shards. In this case what we can do is add the `fingerprint` parameter to `concatenate_datasets` to overwrite the fingerprint here if needed:\r\nhttps://github.com/huggingface/datasets/blob/03b7f123cc17afc517c0aa2f912bbd90cb266185/src/datasets/arrow_dataset.py#L3588-L3590\r\n\r\nthen you can pass the fingerprint to `concatenate_datasets` here:\r\nhttps://github.com/huggingface/datasets/blob/03b7f123cc17afc517c0aa2f912bbd90cb266185/src/datasets/arrow_dataset.py#L2044-L2044",
"Hi @lhoestq, thanks for the pointers! Not having a unique fingerprint per shard was indeed was indeed a problem. \r\n\r\nLet me look into this. I'll be back with a fix soon.",
"Alright, to clarify about my problem. I using am using `datasets` with large datasets, and want to cache a heavy and non-deterministically fingerprintable function (using `datasets.fingerprint.Hasher`). Using `Dataset.map()` as it is would cause generating a random fingerprint. To circumvent this, I am generating custom deterministic fingerprints, which I pass as an argument to `Dataset.map()`. In that way, a deterministic fingerprint is set, and caching can be used. \r\n\r\nThis approach works well when using `num_proc==1`, but not so well when using `num_proc>1`. In both cases, `dataset._fingerprint` is effectively set to `new_fingerprint` at the end of the `.map()` call. However, caching is not used when `num_proc>1`, a non deterministically fingerprintable function and `new_fingerprint != null. The reason is that caching operates within `Dataset._map_single` and `new_fingerprint` is not passed here. \r\n\r\nThis pull request implements a quick fix (+unit test) by passing `new_fingerprint=f\"{new_fingerprint}-part{rank+1}-{num_proc}\"` to each `_map_single` call. Using a separate name for each call makes sure that each worker uses a different cache file (as you mentioned above).\r\n\r\nHowever, this solution still means that using a different value for `num_proc` will require computing new partial cache files. In the long run, performing the caching within `map()` instead of within `_map_single()` would be a cleaner solution.",
"Hi @vlievin,\r\n\r\nIf I understand your example correctly, you are trying to use the `new_fingerprint` param to have a deterministic fingerprint of the transform, which is not hashable due to randomness. Any particular reason why you are not using the `cache_file_name` param instead? I did run your example with the `cache_file_name` specified, and it behaves as expected based on the logs. Internally, `new_fingerprint` is needed to inject the calculated fingerprint into a method by the `fingerprint_transform` decorator, which is then used to compute the cache file name in `Dataset._get_cache_file_path` if the user hasn't specified one. ",
"Hi @lhoestq, I have cleaned up the unit test (incl. styling). It should be ready to merge as such. I am using this branch in my project and everything works fine. \r\n\r\nHi @mariosasko, the argument `new_fingerprint` allowed me to deterministically cache my transformation when using `num_proc=1`, so I assumed that was the right way to go. But maybe I have misinterpreted how `new_fingerprint` should be used.\r\n\r\nBut in any case, `map()` should perform consistently with regards to `num_proc`. In my opinion, the behaviour of `Dataset.map()` should perform the same, and this without requiring the user to input `cache_file_name` when `num_proc>1` is set.\r\nBut maybe there is a more elegant way to fix this using `cache_file_name` internally for each `_single_map()` call.\r\n\r\nSo, I think this is a more high level design decision and I will leave it to the maintainers :) ",
"Hi @vlievin,\r\n\r\nI appreciate your effort, but `new_fingerprint` behaves as described in the `Dataset.map` docs, and we don't have to follow some artificial consistency in regards to `num_proc`:\r\nhttps://github.com/huggingface/datasets/blob/adc5cec58dd15ee672016086fefdea34b3143e4f/src/datasets/arrow_dataset.py#L1962-L1963\r\n\r\nAdditionally, to compute the cache file name, you are using a private method (`dset._get_cache_file_path(new_fingerprint)`); prefixed with `_`), so this is a sign you may be doing something wrong because you are relying on the internals. I suggest you use cache_file_name instead and follow the suffix template docs, which explain how to compute file paths of the created cache files when `num_proc > 1`.",
"Hi @mariosasko, thanks for the pointer regarding the use of the private method in then unit tests. \r\n\r\nYes, `new_fingerprint` behaves as documented. If you don't think this is an issue, feel free to close this pull request. \r\n",
"Allowing the users to pass the fingerprint themselves for functions that can't be hashed would be a nice improvements. However I agree that as @mariosasko mentioned this is currently not how we want the API to behave for now - since it has to do with the internals of the library.\r\n\r\nThough we can discuss what could be the right way of doing it in https://github.com/huggingface/datasets/issues/3044 if you don't mind !"
] | 1,633,690,761,000 | 1,634,835,512,000 | 1,634,826,164,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3045",
"html_url": "https://github.com/huggingface/datasets/pull/3045",
"diff_url": "https://github.com/huggingface/datasets/pull/3045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3045.patch",
"merged_at": null
} | Fix #3044
1. A rough unit test that fails without the fix. It probably doesn't comply with your code standards, but it's just there to draft the idea.
2. A one-liner fix | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3045/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3044/comments | https://api.github.com/repos/huggingface/datasets/issues/3044/events | https://github.com/huggingface/datasets/issues/3044 | 1,020,869,778 | I_kwDODunzps482TyS | 3,044 | Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1` | {
"login": "vlievin",
"id": 9859840,
"node_id": "MDQ6VXNlcjk4NTk4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9859840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vlievin",
"html_url": "https://github.com/vlievin",
"followers_url": "https://api.github.com/users/vlievin/followers",
"following_url": "https://api.github.com/users/vlievin/following{/other_user}",
"gists_url": "https://api.github.com/users/vlievin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vlievin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vlievin/subscriptions",
"organizations_url": "https://api.github.com/users/vlievin/orgs",
"repos_url": "https://api.github.com/users/vlievin/repos",
"events_url": "https://api.github.com/users/vlievin/events{/privacy}",
"received_events_url": "https://api.github.com/users/vlievin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Following the discussion in #3045 if would be nice to have a way to let users have a nice experience with caching even if the function is not hashable.\r\n\r\nCurrently a workaround is to make the function picklable. This can be done by implementing a callable class instead, that can be pickled using by implementing a custom `__getstate__` method for example.\r\n\r\nHowever it sounds pretty complicated for a simple thing. Maybe one idea would be to have something similar to streamlit: they allow users to register the hashing of their own objects.\r\n\r\nSee the documentation about their `hash_funcs` here: https://docs.streamlit.io/library/advanced-features/caching#the-hash_funcs-parameter\r\n\r\nHere is the example they give:\r\n\r\n```python\r\nclass FileReference:\r\n def __init__(self, filename):\r\n self.filename = filename\r\n\r\ndef hash_file_reference(file_reference):\r\n filename = file_reference.filename\r\n return (filename, os.path.getmtime(filename))\r\n\r\n@st.cache(hash_funcs={FileReference: hash_file_reference})\r\ndef func(file_reference):\r\n ...\r\n```",
"My solution was to generate a custom hash, and use the hash as a `new_fingerprint` argument to the `map()` method to enable caching. This works, but is quite hacky.\r\n\r\n@lhoestq, this approach is very neat, this would make the whole caching mechanic more explicit. I don't have so much time to look into this right now, but I might give it a try in the future. "
] | 1,633,684,030,000 | 1,635,324,058,000 | null | NONE | null | null | null | ## Describe the bug
Caching does not work when using `Dataset.map()` with:
1. a function that cannot be deterministically fingerprinted
2. `num_proc>1`
3. a custom fingerprint set with the argument `new_fingerprint`.
This means that the dataset will be mapped with the function for each and every call, which does not happen if `num_proc==1`. In that case (`num_proc==1`) subsequent calls will load the transformed dataset from the cache, which is the expected behaviour. The example can easily be translated into a unit test.
I have a fix and will submit a pull request asap.
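The direction of the fix, sketched under the assumption of one deterministic fingerprint per worker, so that each shard gets its own cache file (the final implementation may differ):
```python
def per_worker_fingerprints(new_fingerprint, num_proc):
    # One deterministic fingerprint per worker/shard, so that each
    # _map_single call writes and later reuses its own cache file.
    return [f"{new_fingerprint}-part{rank + 1}-{num_proc}" for rank in range(num_proc)]


assert per_worker_fingerprints("abc", 2) == ["abc-part1-2", "abc-part2-2"]
```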
## Steps to reproduce the bug
```python
import hashlib
import json
import os
from typing import Dict, Any
import numpy as np
from datasets import load_dataset, Dataset
Batch = Dict[str, Any]
filename = 'example.json'
class Transformation():
"""A transformation with a random state that cannot be fingerprinted"""
def __init__(self):
self.state = np.random.random()
def __call__(self, batch: Batch) -> Batch:
batch['x'] = [np.random.random() for _ in batch['x']]
return batch
def generate_dataset():
"""generate a simple dataset"""
rgn = np.random.RandomState(24)
data = {
'data': [{'x': float(y), 'y': -float(y)} for y in
rgn.random(size=(1000,))]}
if not os.path.exists(filename):
with open(filename, 'w') as f:
f.write(json.dumps(data))
return filename
def process_dataset_with_cache(num_proc=1, remove_cache=False,
cache_expected_to_exist=False):
# load the generated dataset
dset: Dataset = next(
iter(load_dataset('json', data_files=filename, field='data').values()))
new_fingerprint = hashlib.md5("static-id".encode("utf8")).hexdigest()
# get the expected cached path
cache_path = dset._get_cache_file_path(new_fingerprint)
if remove_cache and os.path.exists(cache_path):
os.remove(cache_path)
# check that the cache exists, and print a statement
# if was actually expected to exist
cache_exist = os.path.exists(cache_path)
print(f"> cache file exists={cache_exist}")
if cache_expected_to_exist and not cache_exist:
print("=== Cache does not exist! ====")
# apply the transformation with the new fingerprint
dset = dset.map(
Transformation(),
batched=True,
num_proc=num_proc,
new_fingerprint=new_fingerprint,
desc="mapping dataset with transformation")
generate_dataset()
for num_proc in [1, 2]:
print(f"# num_proc={num_proc}, first pass")
# first pass to generate the cache (always create a new cache here)
process_dataset_with_cache(remove_cache=True,
num_proc=num_proc,
cache_expected_to_exist=False)
print(f"# num_proc={num_proc}, second pass")
# second pass, expects the cache to exist
process_dataset_with_cache(remove_cache=False,
num_proc=num_proc,
cache_expected_to_exist=True)
os.remove(filename)
```
## Expected results
In the above python example, with `num_proc=2`, the **cache file should exist in the second call** of `process_dataset_with_cache` ("=== Cache does not exist! ====" should not be printed).
When the cache is successfully created, `map()` is called only one time.
## Actual results
In the above python example, with `num_proc=2`, the **cache does not exist in the second call** of `process_dataset_with_cache` (this results in printing "=== Cache does not exist! ====").
Because the cache doesn't exist, the `map()` method is executed a second time and the dataset is not loaded from the cache.
## Environment info
- `datasets` version: 1.12.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3044/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3043/comments | https://api.github.com/repos/huggingface/datasets/issues/3043/events | https://github.com/huggingface/datasets/issues/3043 | 1,020,252,114 | I_kwDODunzps48z8_S | 3,043 | Add PASS dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,633,625,023,000 | 1,633,625,023,000 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** PASS
- **Description:** An ImageNet replacement for self-supervised pretraining without humans
- **Data:** https://www.robots.ox.ac.uk/~vgg/research/pass/ https://github.com/yukimasano/PASS
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3043/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3042/comments | https://api.github.com/repos/huggingface/datasets/issues/3042/events | https://github.com/huggingface/datasets/pull/3042 | 1,020,047,289 | PR_kwDODunzps4s5Lxo | 3,042 | Improving elasticsearch integration | {
"login": "ggdupont",
"id": 5583410,
"node_id": "MDQ6VXNlcjU1ODM0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggdupont",
"html_url": "https://github.com/ggdupont",
"followers_url": "https://api.github.com/users/ggdupont/followers",
"following_url": "https://api.github.com/users/ggdupont/following{/other_user}",
"gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions",
"organizations_url": "https://api.github.com/users/ggdupont/orgs",
"repos_url": "https://api.github.com/users/ggdupont/repos",
"events_url": "https://api.github.com/users/ggdupont/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggdupont/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq @albertvillanova Iwas trying to fix the failing tests in circleCI but is there a test elasticsearch instance somewhere? If not, can I launch a docker container to have one?"
] | 1,633,613,315,000 | 1,634,660,259,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3042",
"html_url": "https://github.com/huggingface/datasets/pull/3042",
"diff_url": "https://github.com/huggingface/datasets/pull/3042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3042.patch",
"merged_at": null
} | - adding a murmurhash signature to each sample in the index
- adding optional credentials for a remote Elasticsearch server
- enabling sample updates in the index
- upgrading to the Elasticsearch 7.10.1 Python client
- adding an ElasticsearchBuilder to instantiate a dataset from an index and a filtering query | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3042/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3041/comments | https://api.github.com/repos/huggingface/datasets/issues/3041/events | https://github.com/huggingface/datasets/pull/3041 | 1,018,911,385 | PR_kwDODunzps4s1ZAc | 3,041 | Load private data files + use glob on ZIP archives for json/csv/etc. module inference | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I have an error on windows:\r\n```python\r\naiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]\r\n```\r\nat the `fsspec` call in `xglob`:\r\n```python\r\nfs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)\r\n```\r\n\r\nLooks like the windows CI has an SSL issue... ",
"I can reproduce it on my windows machine. On linux it works fine though",
"I'm just skipping the windows test for now",
"The Windows CI failure seems unrelated to this PR\r\n```python\r\nERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3\r\n```"
] | 1,633,544,196,000 | 1,634,052,348,000 | 1,634,052,346,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3041",
"html_url": "https://github.com/huggingface/datasets/pull/3041",
"diff_url": "https://github.com/huggingface/datasets/pull/3041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3041.patch",
"merged_at": 1634052346000
} | As mentioned in https://github.com/huggingface/datasets/issues/3032, loading data files from a private repository isn't working correctly because of the data files resolver.
#2986 did a refactor of the data files resolver. I added authentication to it.
I also improved it to glob inside ZIP archives to look for json/csv/etc. files and infer which dataset builder (json/csv/etc.) to use.
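For illustration, a minimal sketch of what this enables ("username/private-data" is a hypothetical private repo containing only raw data files, possibly inside ZIP archives):
```python
# Hedged sketch: the repo name is hypothetical; the resolver infers the
# json/csv/etc. builder from the resolved files and authenticates with the token.
from datasets import load_dataset

ds = load_dataset("username/private-data", use_auth_token=True)
```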
Fix https://github.com/huggingface/datasets/issues/3032
Note that #2986 needs to get merged first | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3041/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3041/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3040/comments | https://api.github.com/repos/huggingface/datasets/issues/3040/events | https://github.com/huggingface/datasets/issues/3040 | 1,018,782,475 | I_kwDODunzps48uWML | 3,040 | [save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi,\r\n\r\nthe `save_to_disk` docstring explains that `flatten_indices` has to be called on a dataset before saving it to save only the shard/slice of the dataset.",
"That works! Thansk!\r\n\r\nMight be worth doing that automatically actually in case the `save_to_disk` is called on a dataset that has an indices mapping :-)",
"I agree with @patrickvonplaten: this issue is reported recurrently, so better if we implement the `.flatten_indices()` automatically?",
"That would be great indeed - I don't really see a use case where one would not like to call `.flatten_indices()` before calling `save_to_disk`",
"+1 on this !"
] | 1,633,540,127,000 | 1,635,867,668,000 | 1,635,867,668,000 | MEMBER | null | null | null | ## Describe the bug
When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it to the hub for easy demo/use, not just the small slice is saved but the whole dataset together with an indices file. The problem with this is that the saved dataset is still very big.
## Steps to reproduce the bug
E.g. run the following:
```python
from datasets import load_dataset  # save_to_disk is a Dataset method, not a top-level import
nlp = load_dataset("glue", "mnli", split="train")
nlp.save_to_disk("full")
nlp = nlp.select(range(100))
nlp.save_to_disk("dummy")
```
Now one can see that both `"dummy"` and `"full"` have the same size. This shouldn't be the case IMO.
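A workaround pointed out in the comments below is to materialize the selection with `.flatten_indices()` before saving, so that only the selected rows are written:
```python
# Workaround (see discussion below): drop the indices mapping before saving,
# so only the 100 selected rows end up on disk.
nlp = nlp.select(range(100)).flatten_indices()
nlp.save_to_disk("dummy")
```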
## Expected results
IMO `"dummy"` should be much smaller so that one can easily play around with the dataset on the hub.
## Environment info
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3040/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3039/comments | https://api.github.com/repos/huggingface/datasets/issues/3039/events | https://github.com/huggingface/datasets/pull/3039 | 1,018,219,800 | PR_kwDODunzps4sy_J- | 3,039 | Add sberquad dataset | {
"login": "Alenush",
"id": 13781234,
"node_id": "MDQ6VXNlcjEzNzgxMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alenush",
"html_url": "https://github.com/Alenush",
"followers_url": "https://api.github.com/users/Alenush/followers",
"following_url": "https://api.github.com/users/Alenush/following{/other_user}",
"gists_url": "https://api.github.com/users/Alenush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alenush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alenush/subscriptions",
"organizations_url": "https://api.github.com/users/Alenush/orgs",
"repos_url": "https://api.github.com/users/Alenush/repos",
"events_url": "https://api.github.com/users/Alenush/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alenush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,523,522,000 | 1,634,120,351,000 | 1,634,120,164,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3039",
"html_url": "https://github.com/huggingface/datasets/pull/3039",
"diff_url": "https://github.com/huggingface/datasets/pull/3039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3039.patch",
"merged_at": 1634120164000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3039/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3038/comments | https://api.github.com/repos/huggingface/datasets/issues/3038/events | https://github.com/huggingface/datasets/pull/3038 | 1,018,113,499 | PR_kwDODunzps4syno_ | 3,038 | add sberquad dataset | {
"login": "Alenush",
"id": 13781234,
"node_id": "MDQ6VXNlcjEzNzgxMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alenush",
"html_url": "https://github.com/Alenush",
"followers_url": "https://api.github.com/users/Alenush/followers",
"following_url": "https://api.github.com/users/Alenush/following{/other_user}",
"gists_url": "https://api.github.com/users/Alenush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alenush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alenush/subscriptions",
"organizations_url": "https://api.github.com/users/Alenush/orgs",
"repos_url": "https://api.github.com/users/Alenush/repos",
"events_url": "https://api.github.com/users/Alenush/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alenush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,520,019,000 | 1,633,521,481,000 | 1,633,521,481,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3038",
"html_url": "https://github.com/huggingface/datasets/pull/3038",
"diff_url": "https://github.com/huggingface/datasets/pull/3038.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3038.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3038/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3037/comments | https://api.github.com/repos/huggingface/datasets/issues/3037/events | https://github.com/huggingface/datasets/pull/3037 | 1,018,091,919 | PR_kwDODunzps4syi15 | 3,037 | SberQuad | {
"login": "Alenush",
"id": 13781234,
"node_id": "MDQ6VXNlcjEzNzgxMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alenush",
"html_url": "https://github.com/Alenush",
"followers_url": "https://api.github.com/users/Alenush/followers",
"following_url": "https://api.github.com/users/Alenush/following{/other_user}",
"gists_url": "https://api.github.com/users/Alenush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alenush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alenush/subscriptions",
"organizations_url": "https://api.github.com/users/Alenush/orgs",
"repos_url": "https://api.github.com/users/Alenush/repos",
"events_url": "https://api.github.com/users/Alenush/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alenush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,519,268,000 | 1,633,519,988,000 | 1,633,519,988,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3037",
"html_url": "https://github.com/huggingface/datasets/pull/3037",
"diff_url": "https://github.com/huggingface/datasets/pull/3037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3037.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3037/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3036/comments | https://api.github.com/repos/huggingface/datasets/issues/3036/events | https://github.com/huggingface/datasets/issues/3036 | 1,017,687,944 | I_kwDODunzps48qK-I | 3,036 | Protect master branch to force contributions via Pull Requests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"It would be nice to protect the master from direct commits, but still having a way to merge our own PRs when no review is required (for example when updating a dataset_infos.json file, or minor bug fixes - things that happen quite often actually).\r\nDo you know if there's a way ?",
"you can if you're an admin of the repo",
"This is done. Now the master branch is protected:\r\n- [x] Require a pull request before merging: all commits must be made to a non-protected branch and submitted via a pull request\r\n - Required number of approvals before merging: 1 \r\n- [x] Require linear history: prevent merge commits from being pushed\r\n- [x] These requirements are not enforced for administrators\r\n- [x] Additionally, the master branch is also protected against deletion and force pushes\r\n\r\nCC: @lhoestq @julien-c @thomwolf "
] | 1,633,505,657,000 | 1,633,589,507,000 | 1,633,589,392,000 | MEMBER | null | null | null | In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed.
- The Pull Request allows to give context, discuss any potential issues and improve the quality of the contribution
- The Pull Request will eventually be squashed and merged into master with a single commit that links to the Pull Request page (with all the context/discussions)
Note that we already implemented a protection in the master branch to avoid *merge* commits and ensure a linear history. This proposal goes one step further by avoiding all kinds of direct commits and forcing contributions **only** through Pull Requests.
Please note that we can temporarily deactivate this protection if we need to make a direct commit, e.g. at each new version release.
The only way GitHub allows this kind of protection is by requiring a minimum number (at least one) of approvals of the Pull Request. The inconvenience is that the PR creator cannot approve their own PR: another person must approve it before it can be merged into master. To circumvent this, we could temporarily disable this protection in the master branch when an urgent commit is needed (e.g. for a hotfix) and no other person is available at that time to approve the PR.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3036/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3036/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3035/comments | https://api.github.com/repos/huggingface/datasets/issues/3035/events | https://github.com/huggingface/datasets/issues/3035 | 1,016,770,071 | I_kwDODunzps48mq4X | 3,035 | `load_dataset` does not work with uploaded arrow file | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! This is not a bug, this is simply not implemented.\r\n`save_to_disk` is for on-disk serialization and was not made compatible for the Hub.\r\nThat being said, I agree we actually should make it work with the Hub x)",
"cc @LysandreJik maybe we can solve this at the same time as adding `push_to_hub`"
] | 1,633,464,910,000 | 1,633,539,697,000 | null | MEMBER | null | null | null | ## Describe the bug
I've preprocessed and uploaded a dataset here: https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed . The dataset is in `.arrow` format.
The dataset can correctly be loaded when doing:
```bash
git lfs install
git clone https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed
```
followed by
```python
from datasets import load_from_disk
ds = load_from_disk("./ami_headset_single_preprocessed")
```
However, when I try to directly download the dataset as follows:
```python
from datasets import load_dataset
ds = load_dataset("ami-wav2vec2/ami_headset_single_preprocessed")
```
the following error occurs:
```bash
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
1115 ignore_verifications=ignore_verifications,
1116 try_from_hf_gcs=try_from_hf_gcs,
-> 1117 use_auth_token=use_auth_token,
1118 )
1119
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
635 if not downloaded_from_gcs:
636 self._download_and_prepare(
--> 637 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
638 )
639 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
724 try:
725 # Prepare split will record examples associated to the split
--> 726 self._prepare_split(split_generator, **prepare_split_kwargs)
727 except OSError as e:
728 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
1186 generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET)
1187 ):
-> 1188 writer.write_table(table)
1189 num_examples, num_bytes = writer.finalize()
1190
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_table(self, pa_table, writer_batch_size)
424 # reorder the arrays if necessary + cast to self._schema
425 # we can't simply use .cast here because we may need to change the order of the columns
--> 426 pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
427 batches: List[pa.RecordBatch] = pa_table.to_batches(max_chunksize=writer_batch_size)
428 self._num_bytes += sum(batch.nbytes for batch in batches)
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib._sanitize_arrays()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast()
/usr/local/lib/python3.7/dist-packages/pyarrow/compute.py in cast(arr, target_type, safe)
279 else:
280 options = CastOptions.unsafe(target_type)
--> 281 return call_function("cast", [arr], options)
282
283
/usr/local/lib/python3.7/dist-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
/usr/local/lib/python3.7/dist-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from struct<train: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>, validation: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>, test: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>> to list using function cast_list
```
## Expected results
The dataset should be correctly loaded with `load_dataset` IMO.
## Environment info
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3035/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3034/comments | https://api.github.com/repos/huggingface/datasets/issues/3034/events | https://github.com/huggingface/datasets/issues/3034 | 1,016,759,202 | I_kwDODunzps48moOi | 3,034 | Errors loading dataset using fs = a gcsfs.GCSFileSystem | {
"login": "dconatha",
"id": 74556552,
"node_id": "MDQ6VXNlcjc0NTU2NTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/74556552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dconatha",
"html_url": "https://github.com/dconatha",
"followers_url": "https://api.github.com/users/dconatha/followers",
"following_url": "https://api.github.com/users/dconatha/following{/other_user}",
"gists_url": "https://api.github.com/users/dconatha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dconatha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dconatha/subscriptions",
"organizations_url": "https://api.github.com/users/dconatha/orgs",
"repos_url": "https://api.github.com/users/dconatha/repos",
"events_url": "https://api.github.com/users/dconatha/events{/privacy}",
"received_events_url": "https://api.github.com/users/dconatha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,633,464,428,000 | 1,633,465,599,000 | null | NONE | null | null | null | ## Describe the bug
Cannot load dataset using a `gcsfs.GCSFileSystem`. I'm not sure if this should be a bug in `gcsfs` or here...
Basically what seems to be happening is that, since datasets saves datasets as folders and folders aren't "real objects" in GCS, gcsfs raises a 404 error. There are workarounds if you use gcsfs directly to download the files, but as it stands I can't get `load_from_disk` to work.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# load some dataset
dataset = load_dataset("squad", split="train")
# save it to gcs
import gcsfs
fs = gcsfs.GCSFileSystem(project="my-gs-project")
dataset.save_to_disk("gs://my-bucket/squad", fs=fs)
# try to load it from gcs
from datasets import load_from_disk
dataset2 = load_from_disk("my-bucket/squad", fs=fs)
```
## Expected results
`dataset2` would be a copy of `dataset` but loaded from my bucket.
## Actual results
Long traceback but essentially it's a 404 error from gcsfs saying the object `my-bucket/squad` doesn't exist when this is called:
https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L977
This is because there is no actual object called `my-bucket/squad`, there are objects called `my-bucket/squad/dataset.arrow`, etc.
Note that *this* works fine, since it's explicitly saying "download all the objects with this prefix":
```python
fs.download(src_dataset_path + "/*", dataset_path.as_posix(), recursive=True)
```
For example, I can work around it this way:
```python
import tempfile
with tempfile.TemporaryDirectory() as temppath:
fs.download("gs://my-bucket/squad/*", temppath)
dataset2 = load_from_disk(temppath)
```
It's unclear to me if it's `gcsfs`'s responsibility to say "hey, that's a folder, not a file; I should try to get the objects inside of it, not the object itself", or if that's `datasets`'s responsibility... I'm leaning towards the latter, since you never load a dataset from a single file using this function/method, only from a dataset folder?
Another minor thing that maybe should be rolled into this bug...
https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L968
These fail if you pass in a `gs://` path, e.g.
```python
dataset2 = load_from_disk("gs://my-bucket/squad", fs=fs)
```
Because at this point, `dataset_info_path` is `gs:/my-bucket/squad/dataset_info.json`, gcsfs throws a:
```
Invalid bucket name: 'gs:'
```
error
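One possible direction (a hedged sketch, not the library's actual behavior) would be to strip the URI scheme before joining paths:
```python
# Hedged sketch: fsspec filesystems expose _strip_protocol (technically a
# private helper), which turns "gs://my-bucket/squad" into "my-bucket/squad"
# for a GCSFileSystem. Note the 404 folder issue above would still remain.
clean_path = fs._strip_protocol("gs://my-bucket/squad")
dataset2 = load_from_disk(clean_path, fs=fs)
```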
## Environment info
- `datasets` version: 1.12.1
- Platform: macOS Big Sur 11.6
- Python version: 3.7.12
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3034/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3033/comments | https://api.github.com/repos/huggingface/datasets/issues/3033/events | https://github.com/huggingface/datasets/pull/3033 | 1,016,619,572 | PR_kwDODunzps4std7u | 3,033 | Actual "proper" install of ruamel.yaml in the windows CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,456,327,000 | 1,633,456,497,000 | 1,633,456,497,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3033",
"html_url": "https://github.com/huggingface/datasets/pull/3033",
"diff_url": "https://github.com/huggingface/datasets/pull/3033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3033.patch",
"merged_at": 1633456496000
} | It was impossible to update the package directly with `pip`. Indeed it was installed with `distutils`, which prevents `pip` or `conda` from uninstalling it.
I had to `rm` a directory from the Python `site-packages` directory, and then run `pip install ruamel.yaml`.
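For reference, a hedged sketch of those manual steps (the exact directory name inside `site-packages` is an assumption):
```bash
# Hedged sketch of the manual fix: remove the distutils-installed copy from
# site-packages (the ruamel* directory name is an assumption), then reinstall.
SITE_PACKAGES=$(python -c "import site; print(site.getsitepackages()[0])")
rm -rf "$SITE_PACKAGES"/ruamel*
pip install ruamel.yaml
```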
It's not that "proper", but I couldn't find a better solution. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3033/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3032/comments | https://api.github.com/repos/huggingface/datasets/issues/3032/events | https://github.com/huggingface/datasets/issues/3032 | 1,016,488,475 | I_kwDODunzps48lmIb | 3,032 | Error when loading private dataset with "data_files" arg | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"We'll do a release tomorrow or on wednesday to make the fix available :)\r\n\r\nThanks for reproting !"
] | 1,633,448,787,000 | 1,634,052,382,000 | 1,634,052,346,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
Private datasets with no loading script can't be loaded using the `data_files` parameter.
## Steps to reproduce the bug
```python
from datasets import load_dataset
data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"}
dataset = load_dataset('dalle-mini/encoded', data_files=data_files, use_auth_token=True, streaming=True)
```
The same error happens in non-streaming mode.
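For completeness, a minimal sketch of the equivalent non-streaming call (same parameters as above):
```python
# Non-streaming variant; raises the same FileNotFoundError shown below.
dataset = load_dataset("dalle-mini/encoded", data_files=data_files, use_auth_token=True)
```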
## Expected results
Files should be loaded (whether in streaming or not).
## Actual results
Error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
539 try:
--> 540 local_path = cached_path(file_path, download_config=download_config)
541 except FileNotFoundError:
8 frames
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/dalle-mini/encoded/resolve/main/encoded.py
During handling of the above exception, another exception occurred:
HTTPError Traceback (most recent call last)
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/datasets/dalle-mini/encoded?full=true
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
547 except Exception:
548 raise FileNotFoundError(
--> 549 f"Couldn't find a directory or a {resource_type} named '{path}'. "
550 f"It doesn't exist locally at {expected_dir_for_combined_path_abs} or remotely on {hf_api.endpoint}/datasets"
551 )
FileNotFoundError: Couldn't find a directory or a dataset named 'dalle-mini/encoded'. It doesn't exist locally at /content/dalle-mini/encoded or remotely on https://huggingface.co/datasets
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3032/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3031/comments | https://api.github.com/repos/huggingface/datasets/issues/3031/events | https://github.com/huggingface/datasets/pull/3031 | 1,016,458,496 | PR_kwDODunzps4ss9jn | 3,031 | Align tqdm control with cache control | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Could you add this function to the documentation please ?\r\n\r\nYou can add it in `main_classes.rst`, and maybe add a `Tip` section in the `map` section in the `process.rst`"
] | 1,633,447,129,000 | 1,634,569,221,000 | 1,634,569,170,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3031",
"html_url": "https://github.com/huggingface/datasets/pull/3031",
"diff_url": "https://github.com/huggingface/datasets/pull/3031.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3031.patch",
"merged_at": 1634569170000
} | Currently, once disabled with `disable_progress_bar`, progress bars cannot be re-enabled. To overcome this limitation, this PR introduces the `set_progress_bar_enabled` function that accepts a boolean indicating whether to display progress bars. The goal is to provide a similar API to the existing cache control API. Following the Zen of Python (🐍), there should be one and preferably only one obvious way to do it, so I'm also deprecating the aforementioned `disable_progress_bar` function. Additionally, I justify the deprecation with the fact that this function has never been in the docs.
Moreover, similar API changes have recently been introduced to [`tfds`](https://github.com/tensorflow/datasets/blob/a1e8b98f45b0214082b546cc967c67c43fffda55/tensorflow_datasets/core/utils/tqdm_utils.py#L98-L112).
Considering the popularity of the [comment](https://github.com/huggingface/datasets/issues/1627#issuecomment-751383559) I made a while ago, this API (`set_progress_bar_enabled` and `is_progress_bar_enabled`) should be mentioned in the docs, but I'm not sure where to put it exactly. Maybe we can replace the `logging_methods` page under `package_reference` with `utility_methods` and then introduce two subsections on that page: `Logging methods` and `tqdm control`.
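A hedged usage sketch of the API proposed here (the function names come from this PR; top-level exposure of the two functions is an assumption):
```python
# Usage sketch of the tqdm control API described above.
import datasets

datasets.set_progress_bar_enabled(False)       # hide all progress bars
assert not datasets.is_progress_bar_enabled()
datasets.set_progress_bar_enabled(True)        # re-enable them
```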
Additionally, this PR:
* adds the `disable_tqdm` keyword arg of `Dataset._map_single` to the `ignore_kwargs` list to ignore it when computing the fingerprint (forgot to add it in #2696)
* deletes the unused components in `tqdm_utils.py`, which seem to be inherited from `tfds`
* disables the tqdm output in the test suite. As I see it, this output doesn't seem informative, but let me know if this is not a good idea | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3031/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3030/comments | https://api.github.com/repos/huggingface/datasets/issues/3030/events | https://github.com/huggingface/datasets/pull/3030 | 1,016,435,324 | PR_kwDODunzps4ss41W | 3,030 | Add `remove_columns` to `IterableDataset` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks ! That looks all good :)\r\n\r\nI don't think that batching would help. Indeed we're dealing with python iterators that yield elements one by one, so batched `map` needs to accumulate a batch, apply the function, and then yield examples from the batch.\r\n\r\nThough once we have parallel processing in `map`, we can reconsider it\r\n\r\nAlso feel free to check the CI failure - apparently the import of `Union` is missing",
"Thanks for the review and explaining that! \r\nOn top of what you said, I think `remove_columns` is very unlikely to be a bottleneck, so it doesn't matter anyways.",
"Thank you for reviewing! @mariosasko \r\n\r\nI wonder how the checking would work. Is there any checking present in `IterableDataset ` now? What if `.remove_columns()` is applied after some arbitrary `.map()`?",
"> I wonder how the checking would work. Is there any checking present in IterableDataset now? What if .remove_columns() is applied after some arbitrary .map()?\r\n\r\nThat's the challenge here indeed ^^ In this case it's not trivial to know the names of the columns. Feel free to open an issue so we can discuss this"
] | 1,633,445,913,000 | 1,633,707,195,000 | 1,633,707,113,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3030",
"html_url": "https://github.com/huggingface/datasets/pull/3030",
"diff_url": "https://github.com/huggingface/datasets/pull/3030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3030.patch",
"merged_at": 1633707113000
} | Fixes #2944
WIP
* Not tested yet.
* We might want to allow batched remove for efficiency.
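A hedged usage sketch of the feature (assumes the dataset is streamable; the column names are those of squad and purely illustrative):
```python
# Usage sketch: remove_columns on a streaming (iterable) dataset.
from datasets import load_dataset

ds = load_dataset("squad", split="train", streaming=True)
ds = ds.remove_columns(["title", "context"])  # columns are dropped lazily, per example
print(next(iter(ds)).keys())  # no "title"/"context" anymore
```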
@lhoestq Do you think it should have `batched=` and `batch_size=`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3030/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3029/comments | https://api.github.com/repos/huggingface/datasets/issues/3029/events | https://github.com/huggingface/datasets/pull/3029 | 1,016,389,901 | PR_kwDODunzps4ssvkr | 3,029 | Use standard open-domain validation split in nq_open | {
"login": "craffel",
"id": 417568,
"node_id": "MDQ6VXNlcjQxNzU2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/craffel",
"html_url": "https://github.com/craffel",
"followers_url": "https://api.github.com/users/craffel/followers",
"following_url": "https://api.github.com/users/craffel/following{/other_user}",
"gists_url": "https://api.github.com/users/craffel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/craffel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/craffel/subscriptions",
"organizations_url": "https://api.github.com/users/craffel/orgs",
"repos_url": "https://api.github.com/users/craffel/repos",
"events_url": "https://api.github.com/users/craffel/events{/privacy}",
"received_events_url": "https://api.github.com/users/craffel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I had to run datasets-cli with --ignore_verifications the first time since it was complaining about a missing file, but now it runs without that flag fine. I moved dummy_data.zip to the new folder, but also had to modify the filename of the test file in the zip (should I not have done that?). Finally, I added the pretty name tag.",
"Great, thanks for the help."
] | 1,633,443,567,000 | 1,633,445,806,000 | 1,633,445,805,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3029",
"html_url": "https://github.com/huggingface/datasets/pull/3029",
"diff_url": "https://github.com/huggingface/datasets/pull/3029.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3029.patch",
"merged_at": 1633445805000
} | The nq_open dataset originally drew the validation set from this file:
https://github.com/google-research-datasets/natural-questions/blob/master/nq_open/NQ-open.efficientqa.dev.1.1.sample.jsonl
However, that's the dev set used specifically and only for the EfficientQA competition, and it's not the same dev set as is used in every open-domain question answering paper (including the Lee et al. paper that introduced the open-domain variant of NQ, cited at the top of the dataset file). This PR changes nq_open to use the standard validation split and bumps the version to 2.0.0 since this is a breaking change. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3029/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3028/comments | https://api.github.com/repos/huggingface/datasets/issues/3028/events | https://github.com/huggingface/datasets/pull/3028 | 1,016,230,272 | PR_kwDODunzps4ssO4s | 3,028 | Properly install ruamel-yaml for windows CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I would say this does not \"properly\" install `ruamel-yaml`, but the contrary, you overwrite the previous version without desinstalling it first.\r\n\r\nAccording to `pip` docs:\r\n> This can break your system if the existing package is of a different version or was installed with a different package manager!\r\n\r\nNote that our case fulfills both conditions:\r\n- the installing version (`0.17.16`) is different from the existing one (`0.15.87`)\r\n- you are installing using `pip` (`setuptools`), whereas the exisitng version was installed using `distutils`\r\n\r\nThat is why I did not fix the issue this way, made a hotfix pinning `huggingface_hub` (#3025), while looking for a permanent solution for the issue.",
"Yea I did this because we need the latest version of `huggingface_hub` for #2986 and because I didn't want to ssh to the windows worker x)\r\nMaybe it can be fixed by installing it with conda - let me try",
"Oh yea it may not work since it was first installed with distutils"
] | 1,633,434,675,000 | 1,633,442,532,000 | 1,633,434,682,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3028",
"html_url": "https://github.com/huggingface/datasets/pull/3028",
"diff_url": "https://github.com/huggingface/datasets/pull/3028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3028.patch",
"merged_at": 1633434682000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3028/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3027/comments | https://api.github.com/repos/huggingface/datasets/issues/3027/events | https://github.com/huggingface/datasets/issues/3027 | 1,016,150,117 | I_kwDODunzps48kThl | 3,027 | Resolve data_files by split name | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Awesome @lhoestq I like the proposal and it works great on my JSON community dataset. Here is the [log](https://gist.github.com/vblagoje/714babc325bcbdd5de579fd8e1648892). ",
"From my discussion with @borisdayma it would be more general the files match if their paths contains the split name - not only if the filename contains the split name. For example for a dataset like this:\r\n```\r\ntrain/\r\nβββ data.csv\r\ntest/\r\nβββ data.csv\r\n```\r\n\r\nBut IMO the default should be \r\n```\r\ndata/\r\nβββ train.csv\r\nβββ test.csv\r\n```\r\nbecause it allows people to have other directories if they have different subsets of their data (different configurations, not splits)",
"I just created a PR for this at https://github.com/huggingface/datasets/pull/3221, let me know what you think :)"
] | 1,633,429,476,000 | 1,636,134,598,000 | 1,636,134,597,000 | MEMBER | null | null | null | This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example:
```python
load_dataset("lhoestq/demo1")
```
should return two splits "train" and "test", since the dataset repository is laid out like
```
data/
├── train.csv
└── test.csv
```
Currently it returns only one split "train", which contains the data of both files.
I started playing with this idea on this branch btw: `resolve-data_files-by-split-name`
Basically the idea is that if you named your data files after split names, then the default pattern is
```python
{
"train": ["*train*"],
"test": ["*test*"],
"validation": ["*dev*", "valid"],
}
```
otherwise it's
```python
{
"train": ["*"]
}
```
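For illustration, here is a minimal sketch of how such a pattern mapping could be resolved against a repository's file list (my own sketch using `fnmatch`, not the code on the branch):
```python
import fnmatch

patterns = {"train": ["*train*"], "test": ["*test*"], "validation": ["*dev*", "*valid*"]}
files = ["data/train.csv", "data/test.csv"]

# keep, for each split, the files whose path matches one of its glob patterns
data_files = {
    split: [f for g in globs for f in files if fnmatch.fnmatch(f, g)]
    for split, globs in patterns.items()
}
# drop splits that matched nothing
data_files = {split: matched for split, matched in data_files.items() if matched}
print(data_files)  # {'train': ['data/train.csv'], 'test': ['data/test.csv']}
```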
Let me know what you think!
cc @albertvillanova @LysandreJik @vblagoje | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3027/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3027/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3026/comments | https://api.github.com/repos/huggingface/datasets/issues/3026/events | https://github.com/huggingface/datasets/pull/3026 | 1,016,067,794 | PR_kwDODunzps4srtyc | 3,026 | added arxiv paper in swiss_judgment_prediction dataset card | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,424,521,000 | 1,633,708,904,000 | 1,633,708,884,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3026",
"html_url": "https://github.com/huggingface/datasets/pull/3026",
"diff_url": "https://github.com/huggingface/datasets/pull/3026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3026.patch",
"merged_at": 1633708884000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3026/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3025/comments | https://api.github.com/repos/huggingface/datasets/issues/3025/events | https://github.com/huggingface/datasets/pull/3025 | 1,016,061,222 | PR_kwDODunzps4srsgG | 3,025 | Fix Windows test suite | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,424,122,000 | 1,633,427,908,000 | 1,633,427,907,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3025",
"html_url": "https://github.com/huggingface/datasets/pull/3025",
"diff_url": "https://github.com/huggingface/datasets/pull/3025.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3025.patch",
"merged_at": 1633427907000
} | Try a hotfix to restore Windows test suite.
Fix #3024. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3025/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3024/comments | https://api.github.com/repos/huggingface/datasets/issues/3024/events | https://github.com/huggingface/datasets/issues/3024 | 1,016,052,911 | I_kwDODunzps48j7yv | 3,024 | Windows test suite fails | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,633,423,606,000 | 1,633,427,907,000 | 1,633,427,907,000 | MEMBER | null | null | null | ## Describe the bug
There is an error during installation of tests dependencies for Windows: https://app.circleci.com/pipelines/github/huggingface/datasets/7981/workflows/9b6a0114-2b8e-4069-94e5-e844dbbdba4e/jobs/49206
```
ERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3024/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3023/comments | https://api.github.com/repos/huggingface/datasets/issues/3023/events | https://github.com/huggingface/datasets/pull/3023 | 1,015,923,031 | PR_kwDODunzps4srQ4i | 3,023 | Fix typo | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,413,971,000 | 1,633,435,015,000 | 1,633,435,015,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3023",
"html_url": "https://github.com/huggingface/datasets/pull/3023",
"diff_url": "https://github.com/huggingface/datasets/pull/3023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3023.patch",
"merged_at": 1633435015000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3023/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3022/comments | https://api.github.com/repos/huggingface/datasets/issues/3022/events | https://github.com/huggingface/datasets/pull/3022 | 1,015,750,221 | PR_kwDODunzps4sqve6 | 3,022 | MeDAL dataset: Add further description and update download URL | {
"login": "xhlulu",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhlulu",
"html_url": "https://github.com/xhlulu",
"followers_url": "https://api.github.com/users/xhlulu/followers",
"following_url": "https://api.github.com/users/xhlulu/following{/other_user}",
"gists_url": "https://api.github.com/users/xhlulu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhlulu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhlulu/subscriptions",
"organizations_url": "https://api.github.com/users/xhlulu/orgs",
"repos_url": "https://api.github.com/users/xhlulu/repos",
"events_url": "https://api.github.com/users/xhlulu/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhlulu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I'm a bit confused by the error message. I haven't touched the YAML code at all - do you have any insight on that?",
"I just added the missing `pretty_name` tag in the YAML - sorry about that ;)",
"Thanks! Seems like it did the trick since the tests are passing. Let me know if there's anything else I can do in this PR!",
"It's all good thank you :)\r\n\r\nmerging !"
] | 1,633,392,808,000 | 1,634,115,789,000 | 1,634,115,789,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3022",
"html_url": "https://github.com/huggingface/datasets/pull/3022",
"diff_url": "https://github.com/huggingface/datasets/pull/3022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3022.patch",
"merged_at": 1634115789000
} | Added more details in the following sections:
* Dataset Structure
* Data Instances
* Data Splits
* Source Data
* Annotations
* Discussions of Biases
* Licensing Information | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3022/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3021/comments | https://api.github.com/repos/huggingface/datasets/issues/3021/events | https://github.com/huggingface/datasets/pull/3021 | 1,015,444,094 | PR_kwDODunzps4spzJU | 3,021 | Support loading dataset from multiple zipped CSV data files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,368,837,000 | 1,633,509,406,000 | 1,633,509,405,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3021",
"html_url": "https://github.com/huggingface/datasets/pull/3021",
"diff_url": "https://github.com/huggingface/datasets/pull/3021.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3021.patch",
"merged_at": 1633509405000
} | Partially fix #3018.
CC: @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3021/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3021/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3020/comments | https://api.github.com/repos/huggingface/datasets/issues/3020/events | https://github.com/huggingface/datasets/pull/3020 | 1,015,406,105 | PR_kwDODunzps4sprfa | 3,020 | Add a metric for the MATH dataset (competition_math). | {
"login": "hacobe",
"id": 91226467,
"node_id": "MDQ6VXNlcjkxMjI2NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/91226467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hacobe",
"html_url": "https://github.com/hacobe",
"followers_url": "https://api.github.com/users/hacobe/followers",
"following_url": "https://api.github.com/users/hacobe/following{/other_user}",
"gists_url": "https://api.github.com/users/hacobe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hacobe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hacobe/subscriptions",
"organizations_url": "https://api.github.com/users/hacobe/orgs",
"repos_url": "https://api.github.com/users/hacobe/repos",
"events_url": "https://api.github.com/users/hacobe/events{/privacy}",
"received_events_url": "https://api.github.com/users/hacobe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I believe the only failed test related to this PR is tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math. It gives the following error:\r\n\r\nImportError: To be able to use this dataset, you need to install the following dependencies['math_equivalence'] using 'pip install git+https://github.com/hendrycks/math.git' for instance'\r\n\r\nIt fails along with (these fail with ImportError as well):\r\ntest_load_metric_bertscore\r\ntest_load_metric_bleurt\r\ntest_load_metric_comet\r\ntest_load_metric_coval\r\n\r\nLet me know if there is anything I need to change.",
"Hi ! The script looks all good thanks :)\r\n\r\nTo fix the CI you just need to merge `master` into your branch\r\n```\r\ngit fetch upstream/master\r\ngit merge upstream/master\r\n```\r\n\r\nThen you also need to add `math_equivalence` to the list of git packages installed for the tests in `additional-tests-requirements.txt`\r\nhttps://github.com/huggingface/datasets/blob/ba831e4bcd175ae3d52afbf7d12c4f625bf541b0/additional-tests-requirements.txt#L1-L3",
"I ran:\r\n\r\ngit fetch upstream\r\ngit merge upstream/master\r\n\r\nAnd I also added math_equivalence to the list of git packages installed for the tests in additional-tests-requirements.txt\r\n\r\ntests/test_metric_common.py fails with the same errors as before. tests/test_dataset_cards.py also fails, but it doesn't look related to this PR (it's an issue datasets/ami/README.md).",
"@lhoestq Anything else I can do? I re-merged again and am getting the same test failures as described in the previous comment."
] | 1,633,366,336,000 | 1,634,898,571,000 | 1,634,898,571,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3020",
"html_url": "https://github.com/huggingface/datasets/pull/3020",
"diff_url": "https://github.com/huggingface/datasets/pull/3020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3020.patch",
"merged_at": 1634898571000
} | This metric computes accuracy for the MATH dataset (https://arxiv.org/abs/2103.03874) after canonicalizing the prediction and the reference (e.g., converting "1/2" to "\\\\frac{1}{2}"). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3020/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3019/comments | https://api.github.com/repos/huggingface/datasets/issues/3019/events | https://github.com/huggingface/datasets/pull/3019 | 1,015,339,983 | PR_kwDODunzps4speOB | 3,019 | Fix filter leaking | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,362,178,000 | 1,635,847,098,000 | 1,633,422,787,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3019",
"html_url": "https://github.com/huggingface/datasets/pull/3019",
"diff_url": "https://github.com/huggingface/datasets/pull/3019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3019.patch",
"merged_at": 1633422787000
} | If `filter` is called after a first transform (`shuffle`, `select`, `shard`, `train_test_split`, or `filter`), it could fail to work as expected and return examples from before the first transform. This is because the indices mapping was not taken into account when saving the indices to keep during filtering.
Affected versions: 1.12.0 and 1.12.1
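To make the fix concrete, here is a minimal sketch of composing a kept-indices list with an existing indices mapping (a hypothetical helper, not the library's internal code):
```python
# `existing_mapping` maps positions in the current view to rows of the underlying table;
# `kept_positions` are the positions (in the current view) that pass the filter
def compose_indices(existing_mapping, kept_positions):
    return [existing_mapping[i] for i in kept_positions]

existing_mapping = [2, 0, 1]  # e.g. the view after a shuffle
kept_positions = [0, 2]       # the filter keeps view rows 0 and 2
print(compose_indices(existing_mapping, kept_positions))  # [2, 1]
# Saving `kept_positions` directly (the bug) would instead point at table rows 0 and 2.
```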
This should fix issue https://github.com/huggingface/datasets/issues/3010 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3019/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3018/comments | https://api.github.com/repos/huggingface/datasets/issues/3018/events | https://github.com/huggingface/datasets/issues/3018 | 1,015,311,877 | I_kwDODunzps48hG4F | 3,018 | Support multiple zipped CSV data files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@lhoestq I would like to draw your attention to the proposed API by @lewtun, using `data_dir` to pass the ZIP URL.\r\n\r\nI'm not totally convinced with this... What do you think?\r\n\r\nMaybe we could discuss other approaches...\r\n\r\nOne brainstorming idea: what about using URL chaining with the hop operator in `data_files`?",
"`data_dir` is currently exclusively used for manually downloaded data.\r\n\r\nMaybe we can have an API that only uses data_files as you are suggesting, using URL chaining ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nurl = \"https://domain.org/filename.zip\"\r\ndata_files = {\"train\": \"zip://train_filename.csv::\" + url, \"test\": \"zip://test_filename.csv::\" + url}\r\ndataset = load_dataset(\"csv\", data_files=data_files)\r\n```\r\n\r\nURL chaining is used by `fsspec` to get access to files in nested filesystems of any kind. Since `fsspec` is being used by `pandas`, `dask` and also extensively by `datasets` I think it would be nice to use it here too",
"URL chaining sounds super nice to me! And it's also a nice way to leverage the same concepts we currently have in the docs around `fsspec` :)"
] | 1,633,360,619,000 | 1,633,444,377,000 | null | MEMBER | null | null | null | As requested by @lewtun, support loading multiple zipped CSV data files.
```python
from datasets import load_dataset
url = "https://domain.org/filename.zip"
data_files = {"train": "train_filename.csv", "test": "test_filename.csv"}
dataset = load_dataset("csv", data_dir=url, data_files=data_files)
```
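As discussed in the comments above, an alternative that keeps `data_dir` reserved for manually downloaded data is fsspec-style URL chaining; a sketch of that proposal (assuming the archive contains the two CSV files):
```python
from datasets import load_dataset

url = "https://domain.org/filename.zip"
# "zip://<file inside the archive>::<url of the archive>" chains the two filesystems
data_files = {
    "train": "zip://train_filename.csv::" + url,
    "test": "zip://test_filename.csv::" + url,
}
dataset = load_dataset("csv", data_files=data_files)
```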
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3018/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3018/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3017/comments | https://api.github.com/repos/huggingface/datasets/issues/3017/events | https://github.com/huggingface/datasets/pull/3017 | 1,015,215,528 | PR_kwDODunzps4spE9m | 3,017 | Remove unused parameter in xdirname | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,355,753,000 | 1,633,433,821,000 | 1,633,433,820,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3017",
"html_url": "https://github.com/huggingface/datasets/pull/3017",
"diff_url": "https://github.com/huggingface/datasets/pull/3017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3017.patch",
"merged_at": 1633433820000
} | Minor fix to remove unused args `*p` in `xdirname`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3017/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3016/comments | https://api.github.com/repos/huggingface/datasets/issues/3016/events | https://github.com/huggingface/datasets/pull/3016 | 1,015,208,654 | PR_kwDODunzps4spDlX | 3,016 | Fix Windows paths in LJ Speech dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,355,377,000 | 1,633,360,985,000 | 1,633,360,984,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3016",
"html_url": "https://github.com/huggingface/datasets/pull/3016",
"diff_url": "https://github.com/huggingface/datasets/pull/3016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3016.patch",
"merged_at": 1633360984000
} | Minor fix in LJ Speech dataset for Windows pathname component separator.
Related to #1878. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3016/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3015/comments | https://api.github.com/repos/huggingface/datasets/issues/3015/events | https://github.com/huggingface/datasets/pull/3015 | 1,015,130,845 | PR_kwDODunzps4so0GX | 3,015 | Extend support for streaming datasets that use glob.glob | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,351,357,000 | 1,633,441,599,000 | 1,633,441,598,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3015",
"html_url": "https://github.com/huggingface/datasets/pull/3015",
"diff_url": "https://github.com/huggingface/datasets/pull/3015.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3015.patch",
"merged_at": 1633441598000
} | This PR extends streaming-mode support for datasets that use `glob` by patching the function `glob.glob`.
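Conceptually, the patch swaps `glob.glob` in dataset scripts for an fsspec-backed version, so that patterns also resolve on remote or compressed filesystems in streaming mode. A rough sketch of the idea (my own approximation, not the PR's actual implementation):
```python
import fsspec

def xglob(pattern):
    # let fsspec expand the glob pattern, so it also works on remote filesystems
    _, _, paths = fsspec.get_fs_token_paths(pattern)
    return paths

# streaming mode would then patch the dataset script's module so that
# calls to glob.glob go through xglob instead
```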
Related to #2880, #2876, #2874 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3015/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3014/comments | https://api.github.com/repos/huggingface/datasets/issues/3014/events | https://github.com/huggingface/datasets/pull/3014 | 1,015,070,751 | PR_kwDODunzps4son8A | 3,014 | Fix Windows path in MATH dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,347,667,000 | 1,633,351,604,000 | 1,633,351,604,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3014",
"html_url": "https://github.com/huggingface/datasets/pull/3014",
"diff_url": "https://github.com/huggingface/datasets/pull/3014.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3014.patch",
"merged_at": 1633351604000
} | Minor fix in MATH dataset for Windows pathname component separator.
Related to #2982. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3014/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3013/comments | https://api.github.com/repos/huggingface/datasets/issues/3013/events | https://github.com/huggingface/datasets/issues/3013 | 1,014,960,419 | I_kwDODunzps48fxEj | 3,013 | Improve `get_dataset_infos`? | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"To keeps things simple maybe we should use `load_dataset_builder` in `get_dataset_infos`.\r\n`load_dataset_builder` instantiates a builder and runs the _infos() method in order to give you the most up-to-date infos, even if the dataset_infos.json is outdated or missing."
] | 1,633,340,824,000 | 1,634,895,369,000 | null | CONTRIBUTOR | null | null | null | Using the dedicated function `get_dataset_infos` on a dataset that has no dataset_infos.json file returns an empty dict:
```
>>> from datasets import get_dataset_infos
>>> get_dataset_infos('wit')
{}
```
While it's totally possible to get it (regenerate it) with:
```
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder('wit')
>>> builder.info
DatasetInfo(description='Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set\n of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its\n size enables WIT to be used as a pretraining dataset for multimodal machine learning models.\n', citation='@article{srinivasan2021wit,\n title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},\n author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},\n journal={arXiv preprint arXiv:2103.01913},\n year={2021}\n}\n', homepage='https://github.com/google-research-datasets/wit', license='', features={'b64_bytes': Value(dtype='string', id=None), 'embedding': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'image_url': Value(dtype='string', id=None), 'metadata_url': Value(dtype='string', id=None), 'original_height': Value(dtype='int32', id=None), 'original_width': Value(dtype='int32', id=None), 'mime_type': Value(dtype='string', id=None), 'caption_attribution_description': Value(dtype='string', id=None), 'wit_features': Sequence(feature={'language': Value(dtype='string', id=None), 'page_url': Value(dtype='string', id=None), 'attribution_passes_lang_id': Value(dtype='string', id=None), 'caption_alt_text_description': Value(dtype='string', id=None), 'caption_reference_description': Value(dtype='string', id=None), 'caption_title_and_reference_description': Value(dtype='string', id=None), 'context_page_description': Value(dtype='string', id=None), 'context_section_description': Value(dtype='string', id=None), 'hierarchical_section_title': Value(dtype='string', id=None), 'is_main_image': Value(dtype='string', id=None), 'page_changed_recently': Value(dtype='string', id=None), 'page_title': Value(dtype='string', id=None), 'section_title': Value(dtype='string', id=None)}, length=-1, id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='wit', config_name='default', version=0.0.0, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None)
```
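One option would be to regenerate the info when the returned dict is empty; a possible shape for that is the following sketch (a hypothetical wrapper, not current library behavior):
```python
from datasets import get_dataset_infos, load_dataset_builder

def get_dataset_infos_with_fallback(path):
    infos = get_dataset_infos(path)
    if not infos:  # no dataset_infos.json available: rebuild the info from the builder
        builder = load_dataset_builder(path)
        infos = {builder.info.config_name: builder.info}
    return infos
```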
Should we test if info is empty, and in that case regenerate it? Or always generate it? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3013/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3012/comments | https://api.github.com/repos/huggingface/datasets/issues/3012/events | https://github.com/huggingface/datasets/pull/3012 | 1,014,958,931 | PR_kwDODunzps4soRTu | 3,012 | Replace item with float in metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,340,728,000 | 1,633,347,034,000 | 1,633,347,033,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3012",
"html_url": "https://github.com/huggingface/datasets/pull/3012",
"diff_url": "https://github.com/huggingface/datasets/pull/3012.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3012.patch",
"merged_at": 1633347033000
} | As pointed out by @mariosasko in #3001, calling `float()` instead of `.item()` is faster.
Moreover, it might avoid potential issues if any of the third-party functions eventually returns a `float` instead of an `np.float64`.
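A tiny illustration of that robustness point (an added sketch, not from the PR itself):
```python
import numpy as np

x = np.float64(0.5)  # what the third-party functions usually return
y = 0.5              # a plain Python float they might return instead

print(float(x), float(y))  # float() works on both
print(x.item())            # .item() works on the NumPy scalar...
# y.item()                 # ...but raises AttributeError on a plain float
```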
Related to #3001. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3012/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3011/comments | https://api.github.com/repos/huggingface/datasets/issues/3011/events | https://github.com/huggingface/datasets/issues/3011 | 1,014,935,713 | I_kwDODunzps48frCh | 3,011 | load_dataset_builder should error if "name" does not exist? | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Yes I think it should raise an error. Currently it looks like it instantiates a custom configuration with the name given by the user:\r\nhttps://github.com/huggingface/datasets/blob/ba27ce33bf568374cf23a07669fdd875b5718bc2/src/datasets/builder.py#L391-L397"
] | 1,633,339,246,000 | 1,634,895,369,000 | null | CONTRIBUTOR | null | null | null | ```
import datasets as ds
builder = ds.load_dataset_builder('sent_comp', name="doesnotexist")
builder.info.config_name
```
returns
```
'doesnotexist'
```
Shouldn't it raise an error instead?
For this dataset, the only valid values for `name` should be: `"default"` or `None` (i.e., the argument is not passed) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3011/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3010/comments | https://api.github.com/repos/huggingface/datasets/issues/3010/events | https://github.com/huggingface/datasets/issues/3010 | 1,014,918,470 | I_kwDODunzps48fm1G | 3,010 | Chain filtering is leaking | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"### Update:\r\nI wrote a bit cleaner code snippet (without transforming to json) that can expose leaking.\r\n```python\r\nimport datasets\r\nimport json\r\n\r\nitems = ['ab', 'c', 'df']\r\n\r\nds = datasets.Dataset.from_dict({'col': items})\r\nprint(list(ds))\r\n# > Prints: [{'col': 'ab'}, {'col': 'c'}, {'col': 'df'}]\r\n\r\nfiltered = ds\r\n\r\n# get all items that are starting with a character with ascii code bigger than 'a'\r\nfiltered = filtered.filter(lambda x: x['col'][0] > 'a', load_from_cache_file=False)\r\nprint(list(filtered))\r\n# > Prints: [{'col': 'c'}, {'col': 'df'}] as expected\r\n\r\n# get all items that are shorter than 2\r\nfiltered = filtered.filter(lambda x: len(x['col']) < 2, load_from_cache_file=False)\r\nprint(list(filtered))\r\n# > Prints: [{'col': 'ab'}] -> this is a leaked item from the first filter\r\n# > Should be: [{'col': 'c'}]\r\n```",
"Thanks for reporting. I'm looking into it",
"I just pushed a fix ! We'll do a new release soon.\r\nIn the meantime feel free to install `datasets` from source to play with it",
"Thanks, I'm already using it from your branch!"
] | 1,633,338,295,000 | 1,633,422,968,000 | null | NONE | null | null | null | ## Describe the bug
As there is no support for lists within dataset fields, I convert my lists to JSON strings. However, the bug described here occurs even though the data format is plain 'string'.
These samples show that filtering diverges from the expected behavior when filter calls are chained.
In sample 2, the second filter "leaks" data into the results that should already have been removed by the first filter.
## Steps to reproduce the bug
Sample 1:
```python
import datasets
import json
items = [[1, 2], [3], [4]]
jsoned_items = map(json.dumps, items)
ds = datasets.Dataset.from_dict({'a': jsoned_items})
print(list(ds))
# > Prints: [{'a': '[1, 2]'}, {'a': '[3]'}, {'a': '[4]'}] as expected
filtered = ds
# get all lists that are shorter than 2
filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[3]'}, {'a': '[4]'}] as expected
# get all lists, which have a value bigger than 3 on its zero index
filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False)
print(list(filtered))
# > Should be: [{'a': '[4]'}]
# > Prints: [{'a': '[3]'}]
```
Sample 2:
```python
import datasets
import json
items = [[1, 2], [3], [4]]
jsoned_items = map(json.dumps, items)
ds = datasets.Dataset.from_dict({'a': jsoned_items})
print(list(ds))
# > Prints: [{'a': '[1, 2]'}, {'a': '[3]'}, {'a': '[4]'}]
filtered = ds
# get all lists, which have a value bigger than 3 on its zero index
filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[4]'}] as expected
# get all lists that are shorter than 2
filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[1, 2]'}]
# > Should be: [{'a': '[4]'}] (remain intact)
```
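Until this is fixed, one possible workaround (assuming the leak comes from a stale indices mapping, which is a guess on my part, not a confirmed diagnosis) is to materialize the selection between filters:
```python
# flatten_indices() writes the current selection into a new Arrow table,
# so the next filter starts from the actual rows instead of an indices mapping
filtered = filtered.flatten_indices()
```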
## Expected results
Expected and actual results are attached to the code snippets.
## Actual results
Expected and actual results are attached to the code snippets.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3010/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3009/comments | https://api.github.com/repos/huggingface/datasets/issues/3009/events | https://github.com/huggingface/datasets/pull/3009 | 1,014,868,235 | PR_kwDODunzps4sn_YG | 3,009 | Fix Windows paths in SUPERB benchmark datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,335,229,000 | 1,633,355,005,000 | 1,633,355,005,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3009",
"html_url": "https://github.com/huggingface/datasets/pull/3009",
"diff_url": "https://github.com/huggingface/datasets/pull/3009.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3009.patch",
"merged_at": 1633355004000
} | Minor fix in SUPERB benchmark datasets for Windows pathname component separator.
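For context, the typical pattern behind such fixes is to join archive-internal paths with `/` instead of `os.path.join`, which produces `\` on Windows; below is a sketch with assumed example values, not the actual diff:
```python
data_dir = "extracted"   # hypothetical values for illustration
filename = "sample.wav"

# os.path.join(data_dir, "speech_commands", filename) yields
# "extracted\\speech_commands\\sample.wav" on Windows, but members of a
# downloaded archive are always addressed with "/", so join explicitly:
archive_path = "/".join([data_dir, "speech_commands", filename])
```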
Related to #2884, #2783 and #2619. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3009/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3008/comments | https://api.github.com/repos/huggingface/datasets/issues/3008/events | https://github.com/huggingface/datasets/pull/3008 | 1,014,849,163 | PR_kwDODunzps4sn7iU | 3,008 | Fix precision/recall metrics with None average | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,334,055,000 | 1,633,339,777,000 | 1,633,339,776,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3008",
"html_url": "https://github.com/huggingface/datasets/pull/3008",
"diff_url": "https://github.com/huggingface/datasets/pull/3008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3008.patch",
"merged_at": 1633339776000
} | Related to issue #2979 and PR #2992. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3008/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3007/comments | https://api.github.com/repos/huggingface/datasets/issues/3007/events | https://github.com/huggingface/datasets/pull/3007 | 1,014,775,450 | PR_kwDODunzps4sns-n | 3,007 | Correct a typo | {
"login": "Yann21",
"id": 35955430,
"node_id": "MDQ6VXNlcjM1OTU1NDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/35955430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yann21",
"html_url": "https://github.com/Yann21",
"followers_url": "https://api.github.com/users/Yann21/followers",
"following_url": "https://api.github.com/users/Yann21/following{/other_user}",
"gists_url": "https://api.github.com/users/Yann21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yann21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yann21/subscriptions",
"organizations_url": "https://api.github.com/users/Yann21/orgs",
"repos_url": "https://api.github.com/users/Yann21/repos",
"events_url": "https://api.github.com/users/Yann21/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yann21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,328,147,000 | 1,633,339,677,000 | 1,633,339,677,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3007",
"html_url": "https://github.com/huggingface/datasets/pull/3007",
"diff_url": "https://github.com/huggingface/datasets/pull/3007.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3007.patch",
"merged_at": 1633339677000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3007/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3006/comments | https://api.github.com/repos/huggingface/datasets/issues/3006/events | https://github.com/huggingface/datasets/pull/3006 | 1,014,770,821 | PR_kwDODunzps4snsBm | 3,006 | Fix Windows paths in CommonLanguage dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,327,738,000 | 1,633,338,478,000 | 1,633,338,478,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3006",
"html_url": "https://github.com/huggingface/datasets/pull/3006",
"diff_url": "https://github.com/huggingface/datasets/pull/3006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3006.patch",
"merged_at": 1633338478000
} | Minor fix in CommonLanguage dataset for Windows pathname component separator.
Related to #2989. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3006/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3005/comments | https://api.github.com/repos/huggingface/datasets/issues/3005/events | https://github.com/huggingface/datasets/issues/3005 | 1,014,615,420 | I_kwDODunzps48ec18 | 3,005 | DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @DrMatters, thanks for reporting.\r\n\r\nThis issue was fixed 14 days ago: #2950.\r\n\r\nCurrently, the fix is only in the master branch and will be made available in our next library release.\r\n\r\nIn the meantime, you can incorporate the fix by installing datasets from the master branch:\r\n```shell\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```shell\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```",
"Thanks, sorry for bothering"
] | 1,633,308,569,000 | 1,633,947,481,000 | 1,633,337,173,000 | NONE | null | null | null | ## Describe the bug
The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument
## Steps to reproduce the bug
```python
import datasets
example_dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4]})
def filter_value(example, value):
    return example['a'] == value
filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
```
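Until the fix is released, binding the extra argument with `functools.partial` avoids `fn_kwargs` entirely; this is a workaround sketch, not the eventual fix:
```python
from functools import partial

# partial(filter_value, value=3)(example) == filter_value(example, value=3)
filtered = example_dataset.filter(partial(filter_value, value=3))
```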
## Expected results
`filtered` is a dataset containing `{"a": [3]}`
## Actual results
> Traceback (most recent call last):
> File "C:\Users\qsemi\Documents\git\nlp_experiments\gpt_celebrity\src\test_faulty_filter.py", line 8, in <module>
> filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2169, in filter
> indices = self.map(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1686, in map
> return self._map_single(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2048, in _map_single
> batch = apply_function_on_filtered_inputs(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1939, in apply_function_on_filtered_inputs
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
> TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'value'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3005/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3004/comments | https://api.github.com/repos/huggingface/datasets/issues/3004/events | https://github.com/huggingface/datasets/pull/3004 | 1,014,336,617 | PR_kwDODunzps4smfPF | 3,004 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Please wait until Tuesday. Arxiv pre-print is pending. π€ ",
"Hi @lhoestq, I updated the README with the Arxiv publication info and now the tests are not passing.\r\n\r\nIt seems that the error is completely irrelevant to my code:\r\n\r\n```\r\n Attempting uninstall: ruamel.yaml\r\n Found existing installation: ruamel-yaml 0.15.87\r\nERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.\r\n```",
"Hi ! Thanks for adding this one. Amazing work :o \r\n\r\nYea you can ignore the `ruamel-yaml` error, it's unrelated to your PR and fixed on `master`",
"Hi @lhoestq, \r\n\r\n- I fetched and merged the upstream master to get the `ruamel-yaml` fix.\r\n- I updated the README.md file including more information for the inputs and labels, while I also turned all tables in HTML format hoping that they will render nicely in the dataset card in the HF website.\r\n- I also simplified the CaseHOLD input, excl. the unused `question` field and the `context` replicas, as suggested.\r\n"
] | 1,633,255,405,000 | 1,634,132,222,000 | 1,634,132,221,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3004",
"html_url": "https://github.com/huggingface/datasets/pull/3004",
"diff_url": "https://github.com/huggingface/datasets/pull/3004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3004.patch",
"merged_at": 1634132221000
} | Inspired by the recent widespread use of the GLUE multi-task NLP benchmark (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a dataset to evaluate the performance of NLP methods on legal tasks. LexGLUE is based on seven existing legal NLP datasets, selected using criteria largely from SuperGLUE.
As in GLUE and SuperGLUE (Wang et al., 2019b,a), one of our goals is to push towards generic (or "foundation") models that can cope with multiple NLP tasks, in our case legal NLP tasks, possibly with limited task-specific fine-tuning. Another goal is to provide a convenient and informative entry point for NLP researchers and practitioners wishing to explore or develop methods for legal NLP. With these goals in mind, the datasets we include in LexGLUE and the tasks they address have been simplified in several ways to make it easier for newcomers and generic models to address all tasks.
The LexGLUE benchmark is accompanied by experimental infrastructure that relies on the Hugging Face Transformers library and resides at: https://github.com/coastalcph/lex-glue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3004/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3004/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3003/comments | https://api.github.com/repos/huggingface/datasets/issues/3003/events | https://github.com/huggingface/datasets/pull/3003 | 1,014,137,933 | PR_kwDODunzps4smExP | 3,003 | common_language: Fix license in README.md | {
"login": "jimregan",
"id": 227350,
"node_id": "MDQ6VXNlcjIyNzM1MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/227350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimregan",
"html_url": "https://github.com/jimregan",
"followers_url": "https://api.github.com/users/jimregan/followers",
"following_url": "https://api.github.com/users/jimregan/following{/other_user}",
"gists_url": "https://api.github.com/users/jimregan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimregan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimregan/subscriptions",
"organizations_url": "https://api.github.com/users/jimregan/orgs",
"repos_url": "https://api.github.com/users/jimregan/repos",
"events_url": "https://api.github.com/users/jimregan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimregan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,200,457,000 | 1,633,339,621,000 | 1,633,339,621,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3003",
"html_url": "https://github.com/huggingface/datasets/pull/3003",
"diff_url": "https://github.com/huggingface/datasets/pull/3003.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3003.patch",
"merged_at": 1633339621000
} | ...it's correct elsewhere | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3003/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3002/comments | https://api.github.com/repos/huggingface/datasets/issues/3002/events | https://github.com/huggingface/datasets/pull/3002 | 1,014,120,524 | PR_kwDODunzps4smCNO | 3,002 | Remove a reference to the open Arrow file when deleting a TF dataset created with to_tf_dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq The test passes even without the try/except block!",
"Hey, I'm a little late because I was caught up in the course work, but I double-checked this and it looks great. Thanks for fixing!"
] | 1,633,196,649,000 | 1,634,125,680,000 | 1,634,115,803,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3002",
"html_url": "https://github.com/huggingface/datasets/pull/3002",
"diff_url": "https://github.com/huggingface/datasets/pull/3002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3002.patch",
"merged_at": 1634115803000
} | This [comment](https://github.com/huggingface/datasets/issues/2934#issuecomment-922970919) explains the issue. This PR fixes that with a `weakref` callback (see the sketch after the list below), and additionally:
* renames `TensorflowDatasetMixIn` to `TensorflowDatasetMixin` for consistency
* correctly indents `TensorflowDatasetMixin`'s docstring
* replaces `tf.data.AUTOTUNE` with `tf.data.experimental.AUTOTUNE` (we support TF>=2.2 according to the [setup.py](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/setup.py#L188) and `AUTOTUNE` has been moved to the experimental part of `tf.data` in 1.X if I'm not mistaken)
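Roughly, the idea looks like this; a minimal standalone sketch of the `weakref` pattern with stand-in names, not the exact code from the PR:
```python
import tempfile
import weakref

path = tempfile.NamedTemporaryFile(delete=False).name  # stand-in for the Arrow file

class DatasetLike:
    def __init__(self, path):
        self._file = open(path, "rb")  # the open Arrow file in the real case

ds = DatasetLike(path)
# finalize() holds only a weak reference to `ds`, so it does not keep it alive;
# the callback references the file object directly and closes it on collection
weakref.finalize(ds, ds._file.close)
del ds  # the finalizer fires here and releases the open file handle
```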
Fixes #2934 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3002/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3001/comments | https://api.github.com/repos/huggingface/datasets/issues/3001/events | https://github.com/huggingface/datasets/pull/3001 | 1,014,024,982 | PR_kwDODunzps4sl0BY | 3,001 | Fix cast to Python scalar in Matthews Correlation metric | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,175,099,000 | 1,633,341,244,000 | 1,633,339,572,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3001",
"html_url": "https://github.com/huggingface/datasets/pull/3001",
"diff_url": "https://github.com/huggingface/datasets/pull/3001.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3001.patch",
"merged_at": 1633339572000
} | This PR is motivated by issue #2964.
The Matthews Correlation metric relies on sklearn's `matthews_corrcoef` function to compute the result. This function returns either `float` or `np.float64` (see the [source](https://github.com/scikit-learn/scikit-learn/blob/844b4be24d20fc42cc13b957374c718956a0db39/sklearn/metrics/_classification.py#L906-L909)). Obviously, calling `.item()` on the float value will fail, so I'm fixing this with the built-in `float()` function, which covers both cases. Surprisingly, on my machine, casting `np.float64` to a Python scalar with `float()` is even faster than with the `.item()` method. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3001/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3000/comments | https://api.github.com/repos/huggingface/datasets/issues/3000/events | https://github.com/huggingface/datasets/pull/3000 | 1,013,613,219 | PR_kwDODunzps4skusL | 3,000 | Fix json loader when conversion not implemented | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"And we're already at PR number 3,000 ! ^^",
"Thank you so much for fixing this @lhoestq π ! I just tested the branch out and it works like a charm!"
] | 1,633,110,442,000 | 1,633,111,500,000 | 1,633,110,863,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3000",
"html_url": "https://github.com/huggingface/datasets/pull/3000",
"diff_url": "https://github.com/huggingface/datasets/pull/3000.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3000.patch",
"merged_at": 1633110863000
} | Sometimes the arrow json parser fails if the `block_size` is too small and returns an `ArrowNotImplementedError: JSON conversion to struct...` error.
Increasing the block size makes it work again.
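For context, the retry-with-a-larger-block idea can be sketched like this (my own illustration with assumed constants, not the loader's actual code):
```python
import pyarrow as pa
import pyarrow.json as paj

def read_json_with_retry(path, block_size=1 << 20, max_block_size=1 << 27):
    while True:
        try:
            return paj.read_json(path, read_options=paj.ReadOptions(block_size=block_size))
        except pa.ArrowNotImplementedError:
            if block_size >= max_block_size:
                raise
            block_size *= 2  # double the block so nested structs fit in one chunk
```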
Hopefully this helps with https://github.com/huggingface/datasets/issues/2799
I tried it with the file mentioned in the issue and it worked for me
cc @lewtun can you try again from this branch? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3000/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3000/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2999/comments | https://api.github.com/repos/huggingface/datasets/issues/2999/events | https://github.com/huggingface/datasets/pull/2999 | 1,013,536,933 | PR_kwDODunzps4skgCm | 2,999 | Set trivia_qa writer batch size | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,105,406,000 | 1,633,106,095,000 | 1,633,106,095,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2999",
"html_url": "https://github.com/huggingface/datasets/pull/2999",
"diff_url": "https://github.com/huggingface/datasets/pull/2999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2999.patch",
"merged_at": 1633106095000
} | Save some RAM when generating trivia_qa | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2999/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2998/comments | https://api.github.com/repos/huggingface/datasets/issues/2998/events | https://github.com/huggingface/datasets/issues/2998 | 1,013,372,871 | I_kwDODunzps48ZtfH | 2,998 | cannot shuffle dataset loaded from disk | {
"login": "pya25",
"id": 54274249,
"node_id": "MDQ6VXNlcjU0Mjc0MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/54274249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pya25",
"html_url": "https://github.com/pya25",
"followers_url": "https://api.github.com/users/pya25/followers",
"following_url": "https://api.github.com/users/pya25/following{/other_user}",
"gists_url": "https://api.github.com/users/pya25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pya25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pya25/subscriptions",
"organizations_url": "https://api.github.com/users/pya25/orgs",
"repos_url": "https://api.github.com/users/pya25/repos",
"events_url": "https://api.github.com/users/pya25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pya25/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,633,096,192,000 | 1,633,096,192,000 | null | NONE | null | null | null | ## Describe the bug
A dataset loaded from disk (from S3 with `load_from_disk`) cannot be shuffled.
## Steps to reproduce the bug
```
import s3fs
from datasets import load_from_disk

s3 = s3fs.S3FileSystem()
my_dataset = load_from_disk('s3://my_file/validate', fs=s3)
sample = my_dataset.select(range(100)).shuffle(seed=1234)
```
## Actual results
```
sample = my_dataset.select(range(100)).shuffle(seed=1234)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2494, in shuffle
new_fingerprint=new_fingerprint,
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2303, in select
tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 547, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 258, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpnnu5uhnx/my_file/validate/tmpy76d70g4'
```
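A possible workaround while this is investigated (assuming the failure comes from the indices cache file being written next to the remote path) is to keep the dataset in memory so no temporary file is needed:
```python
my_dataset = load_from_disk('s3://my_file/validate', fs=s3, keep_in_memory=True)
sample = my_dataset.select(range(100)).shuffle(seed=1234)
```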
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Python version: 3.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2998/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2997/comments | https://api.github.com/repos/huggingface/datasets/issues/2997/events | https://github.com/huggingface/datasets/issues/2997 | 1,013,270,069 | I_kwDODunzps48ZUY1 | 2,997 | Dataset has incorrect labels | {
"login": "marshmellow77",
"id": 63367770,
"node_id": "MDQ6VXNlcjYzMzY3Nzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/63367770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marshmellow77",
"html_url": "https://github.com/marshmellow77",
"followers_url": "https://api.github.com/users/marshmellow77/followers",
"following_url": "https://api.github.com/users/marshmellow77/following{/other_user}",
"gists_url": "https://api.github.com/users/marshmellow77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marshmellow77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marshmellow77/subscriptions",
"organizations_url": "https://api.github.com/users/marshmellow77/orgs",
"repos_url": "https://api.github.com/users/marshmellow77/repos",
"events_url": "https://api.github.com/users/marshmellow77/events{/privacy}",
"received_events_url": "https://api.github.com/users/marshmellow77/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @marshmellow77, thanks for reporting.\r\n\r\nThat issue is fixed since `datasets` version 1.9.0 (see 16bc665f2753677c765011ef79c84e55486d4347).\r\n\r\nPlease, update `datasets` with: `pip install -U datasets`",
"Thanks. Please note that the dataset explorer (https://huggingface.co/datasets/viewer/?dataset=turkish_product_reviews) still shows the incorrect state. The sentiment for the first few customer reviews is actually negative and should be labelled with \"0\", see screenshot:\r\n\r\n![Capture1](https://user-images.githubusercontent.com/63367770/135637150-93d9b09b-f1dd-4701-97a5-5cb2672ec0c7.PNG)\r\n\r\n\r\n",
"Thanks @marshmellow77, good catch! I'm transferring this issue to https://github.com/huggingface/datasets-viewer. "
] | 1,633,090,146,000 | 1,633,102,320,000 | 1,633,096,474,000 | NONE | null | null | null | The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached:
![Capture](https://user-images.githubusercontent.com/63367770/135617428-14ce0b27-5208-4e66-a3ee-71542e3257b4.PNG)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2997/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2996/comments | https://api.github.com/repos/huggingface/datasets/issues/2996/events | https://github.com/huggingface/datasets/pull/2996 | 1,013,266,373 | PR_kwDODunzps4sjrP6 | 2,996 | Remove all query parameters when extracting protocol | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Beware of cases like: `http://ufal.ms.mff.cuni.cz/umc/005-en-ur/download.php?f=umc005-corpus.zip` or `gzip://bg-cs.xml::https://opus.nlpl.eu/download.php?f=Europarl/v8/xml/bg-cs.xml.gz`. I see these URLs in the errors (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading?collection=@huggingface/datasets), but not in the `Extraction protocol for file at xxx is not implemented yet` error, so I'm not sure if they would break now or not.\r\n\r\nMaybe: first try to find an extension, and if none, try to remove the `?...` part and retry to find the extension.\r\n\r\nBy the way, here is the list of URLs for errors of this type, with a '?' in the URL:\r\n\r\n```\r\nhttps://dl.orangedox.com/WyaCpL?dl=1\r\nhttps://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download\r\nhttps://drive.google.com/u/0/uc?id=1-CaP3xHgZxOGjQ3pXC5tr9YnIajmel-t&export=download\r\nhttps://drive.google.com/u/0/uc?id=11EBGHMAswT5JDO60xh7gnZfYjpMQs7h7&export=download\r\nhttps://drive.google.com/u/0/uc?id=13JCCr-IjZK7uhbLXeufptr_AxvsKinVl&export=download\r\nhttps://drive.google.com/u/0/uc?id=13ZyFc2qepAYSg9WIFaeJ9y402gblsl2e&export=download\r\nhttps://drive.google.com/u/0/uc?id=15auwrFAlq52JJ61u7eSfnhT9rZtI5sjk&export=download\r\nhttps://drive.google.com/u/0/uc?id=16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO&export=download\r\nhttps://drive.google.com/u/0/uc?id=1BFYF05rx-DK9Eb5hgoIgd6EcB8zOI-zu&export=download\r\nhttps://drive.google.com/u/0/uc?id=1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z&export=download\r\nhttps://drive.google.com/u/0/uc?id=1H7FphKVVCYoH49sUXl79CuztEfJLaKoF&export=download\r\nhttps://drive.google.com/u/0/uc?id=1NAeuWLgYBzLwU5jCdkrtj4_PRUocuvlb&export=download\r\nhttps://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download\r\nhttps://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/\r\nhttps://drive.google.com/u/0/uc?id=1R1jR4DcH2UEaM1ZwDSRHdfTGvkCNu6NW&export=download\r\nhttps://drive.google.com/u/0/uc?id=1hDHeoFIfQzJec1NgZNXh3CTNbchiIvuG&export=download\r\nhttps://drive.google.com/u/0/uc?id=1wxwqnWGRzwvc_-ugRoFX8BPgpO3Q7sch&export=download\r\nhttps://drive.google.com/u/0/uc?id=1ydsOTvBZXKqcRvXawOuePrJ99slOEbkk&export=download\r\nhttps://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\r\nhttps://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k\r\nhttps://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0\r\nhttps://drive.google.com/uc?export=download&id=0Bz8a_Dbh9Qhbd2JNdDBsQUdocVU\r\nhttps://drive.google.com/uc?export=download&id=1-w-0uqaC6hnRn1F_3XqJEvi09zlcTIhX\r\nhttps://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff\r\nhttps://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr\r\nhttps://drive.google.com/uc?export=download&id=1AHUm1-_V9GCtGuDcc8XrMUCJE8B-HHoL\r\nhttps://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U\r\nhttps://drive.google.com/uc?export=download&id=1Ev4RqWcPsLI9rgOGAKh-_dFKqcEZ1u-G\r\nhttps://drive.google.com/uc?export=download&id=1GTHUJxxmjLmG2lnF9dwRgIDRFZaOY3-F\r\nhttps://drive.google.com/uc?export=download&id=1GcUN6mytEcOMBBOvjJOQzBmEkc-LdgQg\r\nhttps://drive.google.com/uc?export=download&id=1J3mucMFTWrgAYa3LuBZoLRR3CzzYD3fa\r\nhttps://drive.google.com/uc?export=download&id=1Jjhbal535VVz2ap4v4r_rN1UEHTdLK5P\r\nhttps://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil\r\nhttps://drive.google.com/uc?export=download&id=1M1M5yIOyjKWGprc3LUeVVwxgKXxgpqxm\r\nhttps://
drive.google.com/uc?export=download&id=1Nug7-Sri50mkJL4GrWw6C2ZIbfeU-6Am\r\nhttps://drive.google.com/uc?export=download&id=1PGa8j1_IqxiGTc3SU6NMB38sAzxCPS34\r\nhttps://drive.google.com/uc?export=download&id=1QsV8C5EPJrQl37mwva_5-IJOrCaOi2tH\r\nhttps://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ\r\nhttps://drive.google.com/uc?export=download&id=1TuWH7uwu6V90QWmZn25qhou1rm97Egmn\r\nhttps://drive.google.com/uc?export=download&id=1U7WdBpd9kJ85S7BbBhWUSiy9NnXrKdO6\r\nhttps://drive.google.com/uc?export=download&id=1USoQ8lJgN8kAWnUnRrupMGrPMLlDVqlV\r\nhttps://drive.google.com/uc?export=download&id=1Uit4Og1pk-br_0UJIO5sdhApyhTuHzqo\r\nhttps://drive.google.com/uc?export=download&id=1Z2ty5hU0tIGRZRDlFQZLO7b5vijRfvo0\r\nhttps://drive.google.com/uc?export=download&id=1ZyFGufe4puX3vjGPbp4xg9Hca3Gwq22g\r\nhttps://drive.google.com/uc?export=download&id=1ZzlIQvw1KNBG97QQCfdatvVrrbeLaM1u\r\nhttps://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0\r\nhttps://drive.google.com/uc?export=download&id=1__EjA6oZsgXQpggPm-h54jZu3kP6Y6zu\r\nhttps://drive.google.com/uc?export=download&id=1aHPVfC5TrlnUjehtagVZoDfq4VccgaNT\r\nhttps://drive.google.com/uc?export=download&id=1cqu_YAgvlyVSzzjcUyP1Cz7q0k8Pw7vN\r\nhttps://drive.google.com/uc?export=download&id=1dUIqVwvoZAtbX_-z5axCoe97XNcFo1No\r\nhttps://drive.google.com/uc?export=download&id=1eTtRs5cUlBP5dXsx-FTAlmXuB6JQi2qj\r\nhttps://drive.google.com/uc?export=download&id=1fUR3MqJ8jTMka6owA0S-Fe6aHmiophc_\r\nhttps://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF\r\nhttps://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx\r\nhttps://drive.google.com/uc?export=download&id=1meSNZHxd_0TZLKCRCYGN-Ke3IA5c1qOE\r\nhttps://drive.google.com/uc?export=download&id=1okwGJiOZmTpNRNgJLCnjFF4Q0H1z4l6_\r\nhttps://drive.google.com/uc?export=download&id=1phryJg4FjCFkn0mSCqIOP2-FscAeKGV0\r\nhttps://drive.google.com/uc?export=download&id=1s8NSFT4Kz0caKZ4VybPNzt88F8ZanprY\r\nhttps://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C\r\nhttps://drive.google.com/uc?export=download&id=1ytVZ4AhubFDOEL7o7XrIRIyhU8g9wvKA\r\nhttps://drive.google.com/uc?id=12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X\r\nhttps://drive.google.com/uc?id=1PGH5H_oW7wUvMw_5xaXvbEN7DFll-wDX\r\nhttps://github.com/MaazAmjad/Datasets-for-Urdu-news/blob/master/Urdu%20Fake%20News%20Dataset.zip?raw=true\r\nhttps://github.com/TevenLeScao/glucose/blob/master/GLUCOSE_training_data.zip?raw=true\r\nhttps://github.com/TevenLeScao/what-time-is-it/blob/master/gutenberg_time_phrases.zip?raw=true\r\nhttps://github.com/aviaefrat/cryptonite/blob/main/data/cryptonite-official-split.zip?raw=true\r\nhttps://github.com/facebookresearch/Imppres/blob/master/dataset/IMPPRES.zip?raw=true\r\nhttps://github.com/ljos/navnkjenner/blob/master/data/bokmaal/no_bokmaal-ud-train.bioes?raw=true\r\nhttps://github.com/ljos/navnkjenner/blob/master/data/nynorsk/no_nynorsk-ud-train.bioes?raw=true\r\nhttps://github.com/ljos/navnkjenner/blob/master/data/samnorsk/no_samnorsk-ud-train.bioes?raw=true\r\nhttps://github.com/mirfan899/Urdu/blob/master/sentiment/imdb_urdu_reviews.csv.tar.gz?raw=true\r\nhttps://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew/blob/master/data/morph_train.tsv?raw=true\r\nhttps://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew/blob/master/data/token_train.tsv?raw=true\r\nhttps://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11858/00-097C-0000-0023-625F-0/hindencorp05.plaintext.gz?sequence=3&isAllow
ed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/299/nchlt_afrikaans_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/312/nchlt_isixhosa_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/319/nchlt_isizulu_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/328/nchlt_sepedi_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/334/nchlt_sesotho_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/341/nchlt_setswana_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/346/nchlt_siswati_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://www.dropbox.com/s/tohrsllcfy7rch4/SimpleQuestions_v2.tgz?dl=1\r\nhttps://zenodo.org/record/1043504/files/corpus-webis-tldr-17.zip?download=1\r\nhttps://zenodo.org/record/1489920/files/articles-training-byarticle-20181122.zip?download=1\r\nhttps://zenodo.org/record/1489920/files/articles-training-bypublisher-20181122.zip?download=1\r\nhttps://zenodo.org/record/2787612/files/SICK.zip?download=1\r\nhttps://zenodo.org/record/3553423/files/Swahili%20data.zip?download=1\r\nhttps://zenodo.org/record/3707949/files/tapaco_v1.0.zip?download=1\r\nhttps://zenodo.org/record/4300294/files/train.csv?download=1\r\n```\r\n\r\n",
"Hi @severo, I just saw your comment. Thank you.\r\n\r\nFinally I just swapped the 2 parsings: first I extract extension and then I remove query parameters. π ",
"OK :) Maybe we should add some unit tests to ensure we improve the detection without regressions (it's Friday afternoon, I trust the unit tests more than my analysis of the code)",
"Great! For the tests, I think we should also add some URLs in the form: `http://ufal.ms.mff.cuni.cz/umc/005-en-ur/download.php?f=umc005-corpus.zip` to be sure they are still correctly detected."
] | 1,633,089,934,000 | 1,633,337,293,000 | 1,633,337,293,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2996",
"html_url": "https://github.com/huggingface/datasets/pull/2996",
"diff_url": "https://github.com/huggingface/datasets/pull/2996.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2996.patch",
"merged_at": 1633337293000
} | Fix `_get_extraction_protocol` to remove all query parameters, like `?raw=true`, `?dl=1`,... (a sketch of the idea follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2996/timeline | null | true |
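A minimal sketch of the swapped parsing order described in the record above — an illustration of the idea under stated assumptions, not the exact code merged in the PR (the function name and the extension set are invented here). Taking the text after the last `.` first, and only then dropping everything from `?` onwards, handles both `file.zip?raw=true`-style URLs and URLs whose filename sits inside a query parameter, such as `download.php?f=umc005-corpus.zip`:

```python
from typing import Optional

# Known compression/archive extensions -- an illustrative subset, not the
# full list used by `datasets`.
KNOWN_COMPRESSION_EXTENSIONS = {"gz", "zip", "tgz", "bz2", "xz"}

def infer_extraction_extension(urlpath: str) -> Optional[str]:
    """Hypothetical helper: infer a compression extension from a URL."""
    extension = urlpath.split(".")[-1]   # e.g. "zip?raw=true", "tgz?dl=1", or just "zip"
    extension = extension.split("?")[0]  # drop "?raw=true", "?dl=1", "?download=1", ...
    return extension if extension in KNOWN_COMPRESSION_EXTENSIONS else None

# Both URL shapes from the discussion resolve to the right extension:
assert infer_extraction_extension("https://github.com/facebookresearch/Imppres/blob/master/dataset/IMPPRES.zip?raw=true") == "zip"
assert infer_extraction_extension("http://ufal.ms.mff.cuni.cz/umc/005-en-ur/download.php?f=umc005-corpus.zip") == "zip"
assert infer_extraction_extension("https://www.dropbox.com/s/tohrsllcfy7rch4/SimpleQuestions_v2.tgz?dl=1") == "tgz"
```

The asserts double as the kind of regression tests suggested in the comments: one URL per query-parameter shape, so a future change that breaks either parsing step fails loudly.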
https://api.github.com/repos/huggingface/datasets/issues/2995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2995/comments | https://api.github.com/repos/huggingface/datasets/issues/2995/events | https://github.com/huggingface/datasets/pull/2995 | 1,013,143,868 | PR_kwDODunzps4sjThd | 2,995 | Fix trivia_qa unfiltered | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI fails due to missing tags, but they will be added in https://github.com/huggingface/datasets/pull/2949"
] | 1,633,082,023,000 | 1,633,082,651,000 | 1,633,082,650,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2995",
"html_url": "https://github.com/huggingface/datasets/pull/2995",
"diff_url": "https://github.com/huggingface/datasets/pull/2995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2995.patch",
"merged_at": 1633082650000
} | Fix https://github.com/huggingface/datasets/issues/2993 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2995/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2994/comments | https://api.github.com/repos/huggingface/datasets/issues/2994/events | https://github.com/huggingface/datasets/pull/2994 | 1,013,000,475 | PR_kwDODunzps4si4I2 | 2,994 | Fix loading compressed CSV without streaming | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,073,339,000 | 1,633,103,596,000 | 1,633,103,596,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2994",
"html_url": "https://github.com/huggingface/datasets/pull/2994",
"diff_url": "https://github.com/huggingface/datasets/pull/2994.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2994.patch",
"merged_at": 1633103595000
} | When support for streaming CSV files was implemented (https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782), a regression was introduced that prevented loading compressed CSV files in non-streaming mode.
This PR fixes the regression, allowing both compressed and uncompressed CSV files to be loaded in either streaming or non-streaming mode (see the usage sketch after this record).
Fix #2977. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2994/timeline | null | true |
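A hypothetical usage check for the behavior this record's PR restores — `data.csv.gz` is a placeholder path, not a file shipped with the library — showing that a gzip-compressed CSV should load in both modes:

```python
from datasets import load_dataset

# "data.csv.gz" is a placeholder; substitute any gzip-compressed CSV file.
eager = load_dataset("csv", data_files="data.csv.gz", split="train")
streamed = load_dataset("csv", data_files="data.csv.gz", split="train", streaming=True)

print(eager[0])               # non-streaming: decompressed and random-accessible
print(next(iter(streamed)))   # streaming: rows are yielded lazily
```

Before the fix, only the `streaming=True` call succeeded for compressed files; after it, both paths behave the same.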