url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-2.21B) | node_id (stringlengths 18-32) | number (int64 1-6.76k) | title (stringlengths 1-290) | user (dict) | labels (listlengths 0-4) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (listlengths 0-4) | milestone (dict) | comments (sequencelengths 0-30) | created_at (unknown) | updated_at (unknown) | closed_at (unknown) | author_association (stringclasses 3 values) | active_lock_reason (float64) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (float64) | state_reason (stringclasses 3 values) | draft (float64 0, 1 ⌀) | pull_request (dict) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6657/comments | https://api.github.com/repos/huggingface/datasets/issues/6657/events | https://github.com/huggingface/datasets/issues/6657 | 2,129,147,085 | I_kwDODunzps5-6DTN | 6,657 | Release not pushed to conda channel | {
"avatar_url": "https://avatars.githubusercontent.com/u/7138162?v=4",
"events_url": "https://api.github.com/users/atulsaurav/events{/privacy}",
"followers_url": "https://api.github.com/users/atulsaurav/followers",
"following_url": "https://api.github.com/users/atulsaurav/following{/other_user}",
"gists_url": "https://api.github.com/users/atulsaurav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/atulsaurav",
"id": 7138162,
"login": "atulsaurav",
"node_id": "MDQ6VXNlcjcxMzgxNjI=",
"organizations_url": "https://api.github.com/users/atulsaurav/orgs",
"received_events_url": "https://api.github.com/users/atulsaurav/received_events",
"repos_url": "https://api.github.com/users/atulsaurav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/atulsaurav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atulsaurav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/atulsaurav"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"",
""
] | "2024-02-11T20:05:17Z" | "2024-03-06T15:06:22Z" | "2024-03-06T15:06:22Z" | NONE | null | ### Describe the bug
The GitHub Actions step to publish release 2.17.0 to the conda channel has failed due to an expired token. Can someone please update the anaconda token and rerun the failed action? @albertvillanova ?
![image](https://github.com/huggingface/datasets/assets/7138162/1b56ad3d-7643-4778-9cce-4bf531717700)
### Steps to reproduce the bug
Please see this actions [link](https://github.com/huggingface/datasets/actions/runs/7842473662)
### Expected behavior
The action runs successfully and the latest release is pushed to the HuggingFace conda channel.
### Environment info
Not applicable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6657/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6657/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6656/comments | https://api.github.com/repos/huggingface/datasets/issues/6656/events | https://github.com/huggingface/datasets/issues/6656 | 2,127,338,377 | I_kwDODunzps5-zJuJ | 6,656 | Error when loading a big local json file | {
"avatar_url": "https://avatars.githubusercontent.com/u/10062216?v=4",
"events_url": "https://api.github.com/users/Riccorl/events{/privacy}",
"followers_url": "https://api.github.com/users/Riccorl/followers",
"following_url": "https://api.github.com/users/Riccorl/following{/other_user}",
"gists_url": "https://api.github.com/users/Riccorl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Riccorl",
"id": 10062216,
"login": "Riccorl",
"node_id": "MDQ6VXNlcjEwMDYyMjE2",
"organizations_url": "https://api.github.com/users/Riccorl/orgs",
"received_events_url": "https://api.github.com/users/Riccorl/received_events",
"repos_url": "https://api.github.com/users/Riccorl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Riccorl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Riccorl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Riccorl"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-02-09T15:14:21Z" | "2024-03-15T22:18:21Z" | null | NONE | null | ### Describe the bug
When trying to load big json files from a local directory, `load_dataset` throws the following error
```
Traceback (most recent call last):
File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
writer.write_table(table)
File "miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 573, in write_table
pa_table = pa_table.combine_chunks()
File "pyarrow/table.pxi", line 3638, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```
### Steps to reproduce the bug
1. Download a big file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-train.json.gz`
2. Load it like `data = load_dataset("json", data_files=["nq-train.json"], split="train")`
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-train.json"], split="train")
```
A similarly formatted but smaller file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz`, is loaded without issues
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-dev.json"], split="train")
```
### Expected behavior
It should load normally
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6656/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6655/comments | https://api.github.com/repos/huggingface/datasets/issues/6655/events | https://github.com/huggingface/datasets/issues/6655 | 2,127,020,042 | I_kwDODunzps5-x8AK | 6,655 | Cannot load the dataset go_emotions | {
"avatar_url": "https://avatars.githubusercontent.com/u/688324?v=4",
"events_url": "https://api.github.com/users/arame/events{/privacy}",
"followers_url": "https://api.github.com/users/arame/followers",
"following_url": "https://api.github.com/users/arame/following{/other_user}",
"gists_url": "https://api.github.com/users/arame/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arame",
"id": 688324,
"login": "arame",
"node_id": "MDQ6VXNlcjY4ODMyNA==",
"organizations_url": "https://api.github.com/users/arame/orgs",
"received_events_url": "https://api.github.com/users/arame/received_events",
"repos_url": "https://api.github.com/users/arame/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arame/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arame/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arame"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"",
""
] | "2024-02-09T12:15:39Z" | "2024-02-12T09:35:55Z" | null | NONE | null | ### Describe the bug
When I run the following code I get an exception:
`go_emotions = load_dataset("go_emotions")`
> AttributeError Traceback (most recent call last)
Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1)
----> [1](vscode-notebook-cell:?execution_count=6&line=1) go_emotions = load_dataset("go_emotions")
[2](vscode-notebook-cell:?execution_count=6&line=2) data = go_emotions.data
File [c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
[2518](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2518) verification_mode = VerificationMode(
[2519](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2519) (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
[2520](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2520) )
[2522](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2522) # Create a dataset builder
-> [2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523) builder_instance = load_dataset_builder(
[2524](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2524) path=path,
[2525](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2525) name=name,
[2526](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2526) data_dir=data_dir,
[2527](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2527) data_files=data_files,
[2528](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2528) cache_dir=cache_dir,
[2529](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2529) features=features,
[2530](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2530) download_config=download_config,
[2531](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2531) download_mode=download_mode,
[2532](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2532) revision=revision,
[2533](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2533) token=token,
[2534](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2534) storage_options=storage_options,
[2535](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2535) trust_remote_code=trust_remote_code,
[2536](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2536) _require_default_config_name=name is None,
...
---> [63](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:63) if issubclass(obj_type, transformers.PreTrainedTokenizerBase):
[64](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:64) pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase)
[66](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:66) # Unwrap `torch.compile`-ed functions
AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
Output is truncated.
### Steps to reproduce the bug
```
from datasets import load_dataset
go_emotions = load_dataset("go_emotions")
```
### Expected behavior
Should simply load the variable with the data from the file
### Environment info
- `datasets` version: 2.16.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6655/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6655/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6654/comments | https://api.github.com/repos/huggingface/datasets/issues/6654/events | https://github.com/huggingface/datasets/issues/6654 | 2,126,939,358 | I_kwDODunzps5-xoTe | 6,654 | Batched dataset map throws exception that cannot cast fixed length array to Sequence | {
"avatar_url": "https://avatars.githubusercontent.com/u/1029671?v=4",
"events_url": "https://api.github.com/users/keesjandevries/events{/privacy}",
"followers_url": "https://api.github.com/users/keesjandevries/followers",
"following_url": "https://api.github.com/users/keesjandevries/following{/other_user}",
"gists_url": "https://api.github.com/users/keesjandevries/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keesjandevries",
"id": 1029671,
"login": "keesjandevries",
"node_id": "MDQ6VXNlcjEwMjk2NzE=",
"organizations_url": "https://api.github.com/users/keesjandevries/orgs",
"received_events_url": "https://api.github.com/users/keesjandevries/received_events",
"repos_url": "https://api.github.com/users/keesjandevries/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keesjandevries/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keesjandevries/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keesjandevries"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-02-09T11:23:19Z" | "2024-02-12T08:26:53Z" | "2024-02-12T08:26:53Z" | NONE | null | ### Describe the bug
I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 2093, failing to correctly process sequence lengths.
### Steps to reproduce the bug
Create virtual environment and activate
```
virtualenv venv
source venv/bin/activate
```
Then install the datasets package (I'm using the latest version)
```
pip install datasets==2.16.1
```
Then run
```python
# bug.py
from datasets import Dataset
from datasets.features import Features, Sequence, Value
data = {
"num": [[1, 2], [3, 4]],
}
features = Features({'num': Sequence(feature=Value(dtype='int32'), length=2)})
dataset = Dataset.from_dict(data, features=features)
dataset.map(lambda x: x, batched=True, batch_size=1)
```
### Expected behavior
I get the following stack trace
```
Map: 50%|█████ | 1/2 [00:00<00:00, 423.92 examples/s]
Traceback (most recent call last):
File "/PATH/TO/BUG_PORT/bug.py", line 9, in <module>
dataset.map(lambda x: x, batched=True, batch_size=1)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
writer.write_batch(batch)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 551, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[2]
to
Sequence(feature=Value(dtype='int32', id=None), length=2, id=None)
```
After some debugging, I found that the if-statement that is actually failing is line 2093 in `datasets/table.py`
```python
# datasets/table.py
...
2093 if feature.length * len(array) == len(array_values):
2094 return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length)
...
```
### Environment info
Platform: MacOS
Datasets version: datasets==2.16.1
Python version: 3.9.6 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6654/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6654/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6653/comments | https://api.github.com/repos/huggingface/datasets/issues/6653/events | https://github.com/huggingface/datasets/pull/6653 | 2,126,831,929 | PR_kwDODunzps5mdv5S | 6,653 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-02-09T10:12:02Z" | "2024-02-09T10:18:20Z" | "2024-02-09T10:12:12Z" | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6653/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6653/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6653.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6653",
"merged_at": "2024-02-09T10:12:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6653.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6653"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6652/comments | https://api.github.com/repos/huggingface/datasets/issues/6652/events | https://github.com/huggingface/datasets/pull/6652 | 2,126,760,798 | PR_kwDODunzps5mdgcv | 6,652 | Release: 2.17.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-02-09T09:25:01Z" | "2024-02-09T10:11:48Z" | "2024-02-09T10:05:35Z" | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6652/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6652/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6652",
"merged_at": "2024-02-09T10:05:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6652"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6651/comments | https://api.github.com/repos/huggingface/datasets/issues/6651/events | https://github.com/huggingface/datasets/issues/6651 | 2,126,649,626 | I_kwDODunzps5-whka | 6,651 | Slice splits support for datasets.load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/37439882?v=4",
"events_url": "https://api.github.com/users/mhorlacher/events{/privacy}",
"followers_url": "https://api.github.com/users/mhorlacher/followers",
"following_url": "https://api.github.com/users/mhorlacher/following{/other_user}",
"gists_url": "https://api.github.com/users/mhorlacher/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mhorlacher",
"id": 37439882,
"login": "mhorlacher",
"node_id": "MDQ6VXNlcjM3NDM5ODgy",
"organizations_url": "https://api.github.com/users/mhorlacher/orgs",
"received_events_url": "https://api.github.com/users/mhorlacher/received_events",
"repos_url": "https://api.github.com/users/mhorlacher/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mhorlacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhorlacher/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mhorlacher"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"",
""
] | "2024-02-09T08:00:21Z" | "2024-02-09T08:00:21Z" | null | NONE | null | ### Feature request
Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`. See https://www.nature.com/articles/s41551-023-01093-3.
### Motivation
Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogenize the APIs of `load_from_disk` and `load_dataset`.
### Your contribution
Sure, if the devs think the feature request is sensible. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6651/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6650/comments | https://api.github.com/repos/huggingface/datasets/issues/6650/events | https://github.com/huggingface/datasets/issues/6650 | 2,125,680,991 | I_kwDODunzps5-s1Ff | 6,650 | AttributeError: 'InMemoryTable' object has no attribute '_batches' | {
"avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4",
"events_url": "https://api.github.com/users/matsuobasho/events{/privacy}",
"followers_url": "https://api.github.com/users/matsuobasho/followers",
"following_url": "https://api.github.com/users/matsuobasho/following{/other_user}",
"gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/matsuobasho",
"id": 13874772,
"login": "matsuobasho",
"node_id": "MDQ6VXNlcjEzODc0Nzcy",
"organizations_url": "https://api.github.com/users/matsuobasho/orgs",
"received_events_url": "https://api.github.com/users/matsuobasho/received_events",
"repos_url": "https://api.github.com/users/matsuobasho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/matsuobasho"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-02-08T17:11:26Z" | "2024-02-21T00:34:41Z" | null | NONE | null | ### Describe the bug
```
Traceback (most recent call last):
File "finetune.py", line 103, in <module>
main(args)
File "finetune.py", line 45, in main
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 868, in map
{
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 869, in <dictcomp>
k: dataset.map(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3432, in _map_single
arrow_formatted_shard = shard.with_format("arrow")
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2667, in with_format
dataset = copy.deepcopy(self)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 153, in deepcopy
y = copier(memo)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/table.py", line 176, in __deepcopy__
memo[id(self._batches)] = list(self._batches)
AttributeError: 'InMemoryTable' object has no attribute '_batches'
```
### Steps to reproduce the bug
I'm running an MLOps flow using AzureML.
The error appears when I run the following function in my training script:
```python
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
seq_length),
batched=True,
batch_size=batch_size,
remove_columns=['col1', 'col2'])
```
```python
def tokenize_function(tok, seq_length, example):
# Pad so that each batch has the same sequence length
inp = tok(example['col1'], padding=True, truncation=True)
outp = tok(example['col2'], padding="max_length", max_length=seq_length)
res = {
'input_ids': inp['input_ids'],
'attention_mask': inp['attention_mask'],
'decoder_input_ids': outp['input_ids'],
'labels': outp['input_ids'],
'decoder_attention_mask': outp['attention_mask']
}
return res
```
### Expected behavior
Processing proceeds without errors. I ran this same workflow 2 weeks ago without a problem. I recreated the environment since then but it doesn't appear that datasets versions have changed since Dec. '23.
### Environment info
datasets 2.16.1
transformers 4.35.2
pyarrow 15.0.0
pyarrow-hotfix 0.6
torch 2.0.1
I'm not using the latest transformers version because there was an error due to a conflict with Azure mlflow when I tried the last time. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6650/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6649/comments | https://api.github.com/repos/huggingface/datasets/issues/6649/events | https://github.com/huggingface/datasets/pull/6649 | 2,124,940,213 | PR_kwDODunzps5mXRo8 | 6,649 | Minor multi gpu doc improvement | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-02-08T11:17:24Z" | "2024-02-08T11:23:35Z" | "2024-02-08T11:17:35Z" | MEMBER | null | just added torch.no_grad and eval() | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6649/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6649/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6649.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6649",
"merged_at": "2024-02-08T11:17:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6649.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6649"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6648/comments | https://api.github.com/repos/huggingface/datasets/issues/6648/events | https://github.com/huggingface/datasets/pull/6648 | 2,124,813,589 | PR_kwDODunzps5mW1MA | 6,648 | Document usage of hfh cli instead of git | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-02-08T10:24:56Z" | "2024-02-08T13:57:41Z" | "2024-02-08T13:51:39Z" | MEMBER | null | (basically the same content as the hfh upload docs, but adapted for datasets) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6648/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6648.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6648",
"merged_at": "2024-02-08T13:51:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6648.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6648"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6647/comments | https://api.github.com/repos/huggingface/datasets/issues/6647/events | https://github.com/huggingface/datasets/pull/6647 | 2,123,397,569 | PR_kwDODunzps5mSB2B | 6,647 | Update loading.mdx to include "jsonl" file loading. | {
"avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4",
"events_url": "https://api.github.com/users/mosheber/events{/privacy}",
"followers_url": "https://api.github.com/users/mosheber/followers",
"following_url": "https://api.github.com/users/mosheber/following{/other_user}",
"gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mosheber",
"id": 22236370,
"login": "mosheber",
"node_id": "MDQ6VXNlcjIyMjM2Mzcw",
"organizations_url": "https://api.github.com/users/mosheber/orgs",
"received_events_url": "https://api.github.com/users/mosheber/received_events",
"repos_url": "https://api.github.com/users/mosheber/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosheber/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mosheber"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-02-07T16:18:08Z" | "2024-02-08T15:34:17Z" | null | NONE | null | * A small update to the documentation, noting the ability to load jsonl files. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6647/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6647/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6647.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6647",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6647.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6647"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6646/comments | https://api.github.com/repos/huggingface/datasets/issues/6646/events | https://github.com/huggingface/datasets/pull/6646 | 2,123,134,128 | PR_kwDODunzps5mRIma | 6,646 | Better multi-gpu example | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-02-07T14:15:01Z" | "2024-02-09T17:43:32Z" | "2024-02-07T14:59:11Z" | MEMBER | null | Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU
The previous example used a model for translation, and the way it was set up was not really the right way to use the model. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6646/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6646/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6646.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6646",
"merged_at": "2024-02-07T14:59:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6646.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6646"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6645/comments | https://api.github.com/repos/huggingface/datasets/issues/6645/events | https://github.com/huggingface/datasets/issues/6645 | 2,122,956,818 | I_kwDODunzps5-icAS | 6,645 | Support fsspec 2024.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"",
""
] | "2024-02-07T12:45:29Z" | "2024-02-29T15:12:19Z" | "2024-02-29T15:12:19Z" | MEMBER | null | Support fsspec 2024.2.
First, we should address:
- #6644 | {
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6645/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6645/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6644/comments | https://api.github.com/repos/huggingface/datasets/issues/6644/events | https://github.com/huggingface/datasets/issues/6644 | 2,122,955,282 | I_kwDODunzps5-iboS | 6,644 | Support fsspec 2023.12 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"",
""
] | "2024-02-07T12:44:39Z" | "2024-02-29T15:12:18Z" | "2024-02-29T15:12:18Z" | MEMBER | null | Support fsspec 2023.12 by handling previous and new glob behavior. | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6644/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6644/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6643/comments | https://api.github.com/repos/huggingface/datasets/issues/6643/events | https://github.com/huggingface/datasets/issues/6643 | 2,121,239,039 | I_kwDODunzps5-b4n_ | 6,643 | Faiss GPU index cannot be serialised when passed to trainer | {
"avatar_url": "https://avatars.githubusercontent.com/u/56388976?v=4",
"events_url": "https://api.github.com/users/rubenweitzman/events{/privacy}",
"followers_url": "https://api.github.com/users/rubenweitzman/followers",
"following_url": "https://api.github.com/users/rubenweitzman/following{/other_user}",
"gists_url": "https://api.github.com/users/rubenweitzman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rubenweitzman",
"id": 56388976,
"login": "rubenweitzman",
"node_id": "MDQ6VXNlcjU2Mzg4OTc2",
"organizations_url": "https://api.github.com/users/rubenweitzman/orgs",
"received_events_url": "https://api.github.com/users/rubenweitzman/received_events",
"repos_url": "https://api.github.com/users/rubenweitzman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rubenweitzman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rubenweitzman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rubenweitzman"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-02-06T16:41:00Z" | "2024-02-15T10:29:32Z" | null | NONE | null | ### Describe the bug
I am working on a retrieval project and have encountered two issues in the Hugging Face Faiss integration:
1. I am trying to pass a dataset with a Faiss index to the Hugging Face trainer. The code works for a CPU Faiss index, but not for a GPU one, which raises the following error:
```
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in _inner_training_loop
train_dataloader = self.get_train_dataloader()
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 831, in get_train_dataloader
train_dataset = self._remove_unused_columns(train_dataset, description="training")
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 725, in _remove_unused_columns
return dataset.remove_columns(ignored_columns)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/fingerprint.py", line 481, in wrapper
out = func(dataset, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2146, in remove_columns
dataset = copy.deepcopy(self)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 161, in deepcopy
rv = reductor(4)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 556, in index_getstate
return {"this": serialize_index(self).tobytes()}
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 1607, in serialize_index
write_index(index, writer)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/swigfaiss.py", line 9843, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /project/faiss/faiss/impl/index_write.cpp:590: don't know how to serialize this type of index
```
The index was created with the add_faiss_index method
```
train_dataset.add_faiss_index(
column='embeddings',
index_name='embeddings',
string_factory=faiss_index_string,
train_size=config.faiss_train_size,
device=0, # Use -1 for CPU, or specify GPU device ID
faiss_verbose=True
)
```
2. Although Faiss is written to support searching on the GPU [https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU), I am getting an error when trying to use the Hugging Face code to do the search on the GPU. This seems to be caused by this line https://github.com/huggingface/datasets/blob/f9975f636542df7f95c27065ea93147440d690b7/src/datasets/search.py#L376 producing the error
```
total_scores, total_examples = self.dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 773, in get_nearest_examples_batch
total_scores, total_indices = self.search_batch(index_name, queries, k, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 727, in search_batch
return self._indexes[index_name].search_batch(queries, k, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 376, in search_batch
if not queries.flags.c_contiguous:
AttributeError: 'Tensor' object has no attribute 'flags'
```
### Steps to reproduce the bug
```
train_dataset.add_faiss_index(
column='embeddings',
index_name='embeddings',
string_factory=faiss_index_string,
train_size=config.faiss_train_size,
device=0, # Use -1 for CPU, or specify GPU device ID
faiss_verbose=True
)
Trainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=data_collator,
tokenizer=tokenizer
)
train_dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k)
```
### Expected behavior
I would expect the Faiss database code to be GPU-compatible.
### Environment info
huggingface Version: 2.16.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6643/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6643/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6642/comments | https://api.github.com/repos/huggingface/datasets/issues/6642/events | https://github.com/huggingface/datasets/issues/6642 | 2,119,085,766 | I_kwDODunzps5-Tq7G | 6,642 | Differently dataset object saved than it is loaded. | {
"avatar_url": "https://avatars.githubusercontent.com/u/31218150?v=4",
"events_url": "https://api.github.com/users/MFajcik/events{/privacy}",
"followers_url": "https://api.github.com/users/MFajcik/followers",
"following_url": "https://api.github.com/users/MFajcik/following{/other_user}",
"gists_url": "https://api.github.com/users/MFajcik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFajcik",
"id": 31218150,
"login": "MFajcik",
"node_id": "MDQ6VXNlcjMxMjE4MTUw",
"organizations_url": "https://api.github.com/users/MFajcik/orgs",
"received_events_url": "https://api.github.com/users/MFajcik/received_events",
"repos_url": "https://api.github.com/users/MFajcik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFajcik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFajcik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFajcik"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-02-05T17:28:57Z" | "2024-02-06T09:50:19Z" | "2024-02-06T09:50:19Z" | NONE | null | ### Describe the bug
A differently sized dataset is loaded than the one that was saved.
### Steps to reproduce the bug
Hi, I save the dataset in the following way:
```
dataset = load_dataset("json",
data_files={
"train": os.path.join(input_folder, f"{task_meta_type}_{task_type}_train.jsonl"),
"test": os.path.join(input_folder, f"{task_meta_type}_{task_type}_test.jsonl")})
print(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
dataset.save_to_disk(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
```
this yields output
```
.data/hf_dataset/propaganda_zanr
Length of train dataset: 7642
Length of test dataset: 1000
```
Everything looks fine.
Then I load the dataset
```python
from datasets import load_dataset
dataset_path = ".data/hf_dataset/propaganda_zanr"
dataset = load_dataset(dataset_path)
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
```
this prints
```
Generating train split: 1 examples [00:00, 72.10 examples/s]
Generating test split: 1 examples [00:00, 100.69 examples/s]
Length of train dataset: 1
Length of test dataset: 1
```
I don't understand :(
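For reference, a minimal sketch of the reload path that matches `save_to_disk` (assuming the directory was written by `save_to_disk` as above) uses `load_from_disk` instead of `load_dataset`:
```python
from datasets import load_from_disk

dataset = load_from_disk(".data/hf_dataset/propaganda_zanr")
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
```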
### Expected behavior
The same dataset (with the same number of examples) should be loaded.
### Environment info
datasets==2.16.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6642/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6641/comments | https://api.github.com/repos/huggingface/datasets/issues/6641/events | https://github.com/huggingface/datasets/issues/6641 | 2,116,963,132 | I_kwDODunzps5-Lks8 | 6,641 | unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte | {
"avatar_url": "https://avatars.githubusercontent.com/u/109789057?v=4",
"events_url": "https://api.github.com/users/Hughhuh/events{/privacy}",
"followers_url": "https://api.github.com/users/Hughhuh/followers",
"following_url": "https://api.github.com/users/Hughhuh/following{/other_user}",
"gists_url": "https://api.github.com/users/Hughhuh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hughhuh",
"id": 109789057,
"login": "Hughhuh",
"node_id": "U_kgDOBos_gQ",
"organizations_url": "https://api.github.com/users/Hughhuh/orgs",
"received_events_url": "https://api.github.com/users/Hughhuh/received_events",
"repos_url": "https://api.github.com/users/Hughhuh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hughhuh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hughhuh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hughhuh"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"",
""
] | "2024-02-04T08:49:31Z" | "2024-02-06T09:26:07Z" | "2024-02-06T09:11:45Z" | NONE | null | ### Describe the bug
unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
### Steps to reproduce the bug
```
import sys
sys.getdefaultencoding()
'utf-8'
from datasets import load_dataset
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
Resolving data files: 100%
159/159 [00:00<00:00, 9909.28it/s]
Using custom data configuration samsum-0b1209637541c9e6
Downloading and preparing dataset json/samsum to C:/Users/Administrator/.cache/huggingface/datasets/json/samsum-0b1209637541c9e6/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%
3/3 [00:00<00:00, 119.99it/s]
Extracting data files: 100%
3/3 [00:00<00:00, 9.54it/s]
Generating train split:
88392/0 [00:15<00:00, 86848.17 examples/s]
Generating test split:
0/0 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:132, in Json._generate_tables(self, files)
131 try:
--> 132 pa_table = paj.read_json(
133 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
134 )
135 break
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\_json.pyx:290, in pyarrow._json.read_json()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: JSON parse error: Invalid value. in row 0
During handling of the above exception, another exception occurred:
UnicodeDecodeError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1819, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1818 _time = time.time()
-> 1819 for _, table in generator:
1820 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:153, in Json._generate_tables(self, files)
152 with open(file, encoding="utf-8") as f:
--> 153 dataset = json.load(f)
154 except json.JSONDecodeError:
File ~\AppData\Local\Programs\Python\Python310\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
277 a JSON document) to a Python object.
278
(...)
291 kwarg; otherwise ``JSONDecoder`` is used.
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
295 parse_float=parse_float, parse_int=parse_int,
296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File ~\AppData\Local\Programs\Python\Python310\lib\codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[81], line 5
1 from datasets import load_dataset
3 # Load dataset from the hub
4 #dataset = load_dataset("json",data_files="C:/Users/Administrator/Desktop/samsum/samsum/data/corpus/train.json",field="data")
----> 5 dataset = load_dataset('json',"samsum")
6 #dataset = load_dataset("samsum")
7 print(f"Train dataset size: {len(dataset['train'])}")
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py:1758, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1755 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1757 # Download and prepare data
-> 1758 builder_instance.download_and_prepare(
1759 download_config=download_config,
1760 download_mode=download_mode,
1761 ignore_verifications=ignore_verifications,
1762 try_from_hf_gcs=try_from_hf_gcs,
1763 num_proc=num_proc,
1764 )
1766 # Build dataset for splits
1767 keep_in_memory = (
1768 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1769 )
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:860, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
858 if num_proc is not None:
859 prepare_split_kwargs["num_proc"] = num_proc
--> 860 self._download_and_prepare(
861 dl_manager=dl_manager,
862 verify_infos=verify_infos,
863 **prepare_split_kwargs,
864 **download_and_prepare_kwargs,
865 )
866 # Sync info
867 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:953, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
949 split_dict.add(split_generator.split_info)
951 try:
952 # Prepare split will record examples associated to the split
--> 953 self._prepare_split(split_generator, **prepare_split_kwargs)
954 except OSError as e:
955 raise OSError(
956 "Cannot find data file. "
957 + (self.manual_download_instructions or "")
958 + "\nOriginal error:\n"
959 + str(e)
960 ) from None
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1708, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1706 gen_kwargs = split_generator.gen_kwargs
1707 job_id = 0
-> 1708 for job_id, done, content in self._prepare_split_single(
1709 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1710 ):
1711 if done:
1712 result = content
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1851, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1849 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1850 e = e.__context__
-> 1851 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1853 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
The dataset should load correctly; currently it can't be loaded at all.
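A possible workaround sketch for the underlying encoding problem (the path and the source encoding below are assumptions): re-encode the JSON file to UTF-8 before passing it to `load_dataset`.
```python
from datasets import load_dataset

# Assumed local path; replace with the actual location of the samsum JSON file.
src = "samsum/data/corpus/train.json"
fixed = "samsum/data/corpus/train.utf8.json"

# The 0xac byte suggests the file is not UTF-8. "cp1252" is only a guess here;
# detect the real encoding first (e.g. with the `charset-normalizer` package).
with open(src, encoding="cp1252") as f_in, open(fixed, "w", encoding="utf-8") as f_out:
    f_out.write(f_in.read())

dataset = load_dataset("json", data_files={"train": fixed})
```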
### Environment info
dataset:samsum
system :win10
gpu:m40 24G | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6641/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6641/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6640/comments | https://api.github.com/repos/huggingface/datasets/issues/6640/events | https://github.com/huggingface/datasets/issues/6640 | 2,115,864,531 | I_kwDODunzps5-HYfT | 6,640 | Sign Language Support | {
"avatar_url": "https://avatars.githubusercontent.com/u/6684795?v=4",
"events_url": "https://api.github.com/users/Merterm/events{/privacy}",
"followers_url": "https://api.github.com/users/Merterm/followers",
"following_url": "https://api.github.com/users/Merterm/following{/other_user}",
"gists_url": "https://api.github.com/users/Merterm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Merterm",
"id": 6684795,
"login": "Merterm",
"node_id": "MDQ6VXNlcjY2ODQ3OTU=",
"organizations_url": "https://api.github.com/users/Merterm/orgs",
"received_events_url": "https://api.github.com/users/Merterm/received_events",
"repos_url": "https://api.github.com/users/Merterm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Merterm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Merterm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Merterm"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"",
""
] | "2024-02-02T21:54:51Z" | "2024-02-02T21:54:51Z" | null | NONE | null | ### Feature request
Currently, there are only a few Sign Language labels. I would like to propose adding, as new labels, all the Signed Languages described in this ISO standard: https://www.evertype.com/standards/iso639/sign-language.html
### Motivation
Datasets currently only have labels for several signed languages, but there are many more signed languages in the world. Furthermore, some signed languages that have a lot of online data cannot be found for this reason (for instance, German Sign Language: there is no German Sign Language label on Hugging Face Datasets, even though many readily available German Sign Language datasets exist and are used very frequently in Sign Language Processing papers and models).
### Your contribution
I can submit a PR for this as well, adding the ISO codes and languages to the labels in datasets. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6640/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6639/comments | https://api.github.com/repos/huggingface/datasets/issues/6639/events | https://github.com/huggingface/datasets/pull/6639 | 2,114,620,200 | PR_kwDODunzps5l0KPG | 6,639 | Run download_and_prepare if missing splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-02-02T10:36:49Z" | "2024-02-06T16:54:22Z" | null | MEMBER | null | A first step towards https://github.com/huggingface/datasets/issues/6529 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6639/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6639/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6639.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6639",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6639.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6639"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6638/comments | https://api.github.com/repos/huggingface/datasets/issues/6638/events | https://github.com/huggingface/datasets/issues/6638 | 2,113,329,257 | I_kwDODunzps599thp | 6,638 | Cannot download wmt16 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/81709031?v=4",
"events_url": "https://api.github.com/users/vidyasiv/events{/privacy}",
"followers_url": "https://api.github.com/users/vidyasiv/followers",
"following_url": "https://api.github.com/users/vidyasiv/following{/other_user}",
"gists_url": "https://api.github.com/users/vidyasiv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vidyasiv",
"id": 81709031,
"login": "vidyasiv",
"node_id": "MDQ6VXNlcjgxNzA5MDMx",
"organizations_url": "https://api.github.com/users/vidyasiv/orgs",
"received_events_url": "https://api.github.com/users/vidyasiv/received_events",
"repos_url": "https://api.github.com/users/vidyasiv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vidyasiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vidyasiv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vidyasiv"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-02-01T19:41:42Z" | "2024-02-01T20:07:29Z" | "2024-02-01T20:07:29Z" | NONE | null | ### Describe the bug
As of this morning (PST) 2/1/2024, the wmt16 dataset appears to be missing from OPUS; could you suggest an alternative?
```
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "test.py", line 2, in <module>
raw_datasets = load_dataset("wmt16","ro-en",split="train")
File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 2153, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1717, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1027, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/wmt_utils.py", line 754, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 565, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 428, in download
downloaded_path_or_paths = map_nested(
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 464, in map_nested
mapped = [
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 465, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 367, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 454, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 182, in cached_path
output_path = get_from_cache(
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 596, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
```
### Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("wmt16","ro-en",split="train")
```
### Expected behavior
I would expect the dataset to be downloaded, or at least a clean exit with an error explaining that the dataset is missing and a suggestion for next steps.
### Environment info
- `datasets` version: 2.14.7
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.17.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6638/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6637/comments | https://api.github.com/repos/huggingface/datasets/issues/6637/events | https://github.com/huggingface/datasets/issues/6637 | 2,113,025,975 | I_kwDODunzps598je3 | 6,637 | 'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/22883190?v=4",
"events_url": "https://api.github.com/users/tobycrisford/events{/privacy}",
"followers_url": "https://api.github.com/users/tobycrisford/followers",
"following_url": "https://api.github.com/users/tobycrisford/following{/other_user}",
"gists_url": "https://api.github.com/users/tobycrisford/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tobycrisford",
"id": 22883190,
"login": "tobycrisford",
"node_id": "MDQ6VXNlcjIyODgzMTkw",
"organizations_url": "https://api.github.com/users/tobycrisford/orgs",
"received_events_url": "https://api.github.com/users/tobycrisford/received_events",
"repos_url": "https://api.github.com/users/tobycrisford/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tobycrisford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tobycrisford/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tobycrisford"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-02-01T17:16:54Z" | "2024-02-05T10:43:47Z" | null | NONE | null | ### Describe the bug
If you:
1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset
2. Set the output format to torch tensors with .with_format('torch')
Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch formatting.
### Steps to reproduce the bug
```python
import datasets
import torch
from tqdm import tqdm
rand_a = torch.randn(3,224,224)
rand_b = torch.randn(3,224,224)
a = torch.stack([rand_a] * 1000)
b = torch.stack([rand_b] * 1000)
features = datasets.Features({"tensor": datasets.Array3D(shape=(3,224,224), dtype="float32")})
ds_a = datasets.Dataset.from_dict({"tensor": a}, features=features).to_iterable_dataset()
ds_b = datasets.Dataset.from_dict({"tensor": b}, features=features).to_iterable_dataset()
# Iterating through either dataset with torch formatting is really fast (2000it/s on my machine)
for example in tqdm(ds_a.with_format('torch')):
pass
# Iterating through either dataset shuffled is also pretty fast (100it/s on my machine)
for example in tqdm(ds_a.shuffle()):
pass
# Iterating through this interleaved dataset is pretty fast (200it/s on my machine)
ds_fast = datasets.interleave_datasets([ds_a, ds_b])
for example in tqdm(ds_fast):
pass
# Iterating through either dataset with torch formatting *after shuffling* is really slow... (<2it/s on my machine)
for example in tqdm(ds_a.shuffle().with_format('torch')):
pass
# Iterating through this torch formatted interleaved dataset is also really slow (<2it/s on my machine)...
ds_slow = datasets.interleave_datasets([ds_a, ds_b]).with_format('torch')
for example in tqdm(ds_slow):
pass
# Even doing this is way faster!! (70it/s on my machine)
for example in tqdm(ds_fast):
test = torch.tensor(example['tensor'])
```
### Expected behavior
Applying torch formatting to the interleaved dataset shouldn't increase the time taken to iterate through the dataset by very much, since even explicitly converting every example is over 70x faster than calling .with_format('torch').
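Until this is fixed, a sketch of the workaround implied by the last timing above (manual conversion on top of the unformatted stream; `ds_fast` is the interleaved dataset from the snippet):
```python
import torch

def iter_as_torch(iterable_ds):
    # Convert each example by hand instead of relying on .with_format('torch'),
    # which is the slow path reported above.
    for example in iterable_ds:
        yield {"tensor": torch.as_tensor(example["tensor"])}

for example in iter_as_torch(ds_fast):
    pass
```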
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.38
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6637/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6636/comments | https://api.github.com/repos/huggingface/datasets/issues/6636/events | https://github.com/huggingface/datasets/pull/6636 | 2,110,781,097 | PR_kwDODunzps5lm4zI | 6,636 | Faster column validation and reordering | {
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/psmyth94",
"id": 11325244,
"login": "psmyth94",
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/psmyth94"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-31T19:08:28Z" | "2024-02-07T19:39:00Z" | "2024-02-06T23:03:38Z" | CONTRIBUTOR | null | I work with bioinformatics data and often these tables have thousands and even tens of thousands of features. These tables are also accompanied by metadata that I do not want to pass in the model. When I perform `set_format('pt', columns=large_column_list)` , it can take several minutes before it finishes. The culprit is when the following check is performed: `any(col not in self._data.column_names for col in columns)`. Replacing this by `set(columns) - (self._data.column_names)` is more efficient. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6636/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6636/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6636.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6636",
"merged_at": "2024-02-06T23:03:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6636.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6636"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6635 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6635/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6635/comments | https://api.github.com/repos/huggingface/datasets/issues/6635/events | https://github.com/huggingface/datasets/pull/6635 | 2,110,659,519 | PR_kwDODunzps5lmeNO | 6,635 | Fix missing info when loading some datasets from Parquet export | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-31T17:55:21Z" | "2024-02-07T16:48:55Z" | "2024-02-07T16:41:04Z" | MEMBER | null | Fix getting the info for script-based datasets with Parquet export with a single config not named "default".
E.g.
```python
from datasets import load_dataset_builder
b = load_dataset_builder("bookcorpus")
print(b.info.features)
# should print {'text': Value(dtype='string', id=None)}
```
I fixed this by setting the default config name when there is only one config. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6635/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6635/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6635.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6635",
"merged_at": "2024-02-07T16:41:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6635.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6635"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6634/comments | https://api.github.com/repos/huggingface/datasets/issues/6634/events | https://github.com/huggingface/datasets/pull/6634 | 2,110,242,376 | PR_kwDODunzps5llB9a | 6,634 | Support data_dir parameter in push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-31T14:37:36Z" | "2024-02-05T10:32:49Z" | "2024-02-05T10:26:40Z" | MEMBER | null | Support `data_dir` parameter in `push_to_hub`.
This allows users to organize the data files according to their specific needs. For example, "wikimedia/wikipedia" files could be organized by year and/or date, e.g. "2024/20240101/20240101.en". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6634/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6634.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6634",
"merged_at": "2024-02-05T10:26:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6634.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6634"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6633/comments | https://api.github.com/repos/huggingface/datasets/issues/6633/events | https://github.com/huggingface/datasets/pull/6633 | 2,110,124,475 | PR_kwDODunzps5lknz9 | 6,633 | dataset viewer requires no-script | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-31T13:41:54Z" | "2024-01-31T14:05:04Z" | "2024-01-31T13:59:01Z" | CONTRIBUTOR | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6633/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6633/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6633.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6633",
"merged_at": "2024-01-31T13:59:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6633.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6633"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6632/comments | https://api.github.com/repos/huggingface/datasets/issues/6632/events | https://github.com/huggingface/datasets/pull/6632 | 2,108,541,678 | PR_kwDODunzps5lfPuk | 6,632 | Fix reload cache with data dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-30T18:52:23Z" | "2024-02-06T17:27:35Z" | "2024-02-06T17:21:24Z" | MEMBER | null | The cache used to only check for the latest cache directory with a given config_name, but it was wrong (e.g. `default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` instead of `default-data_dir=data%2Ffortran`)
I fixed this by not passing the `config_kwargs` to the parent Builder `__init__`, and passing the config_id forged from the `config_kwargs` directly
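The user-facing effect, sketched (builder and local layout are illustrative): calling `load_dataset` twice with the same `data_dir` should now reuse the cache instead of regenerating it.
```python
from datasets import load_dataset

# First call prepares and caches the dataset under a config id derived from data_dir.
ds = load_dataset("text", data_dir="data/fortran")

# An identical second call should find the existing cache directory and reuse it.
ds = load_dataset("text", data_dir="data/fortran")
```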
close https://github.com/huggingface/datasets/issues/6609 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6632/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6632.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6632",
"merged_at": "2024-02-06T17:21:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6632.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6632"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6631/comments | https://api.github.com/repos/huggingface/datasets/issues/6631/events | https://github.com/huggingface/datasets/pull/6631 | 2,107,802,473 | PR_kwDODunzps5lcu9A | 6,631 | Fix filelock: use current umask for filelock >= 3.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-30T12:56:01Z" | "2024-01-30T15:34:49Z" | "2024-01-30T15:28:37Z" | MEMBER | null | reported in https://github.com/huggingface/evaluate/issues/542
cc @stas00 @williamberrios
close https://github.com/huggingface/datasets/issues/6589 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6631/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6631/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6631.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6631",
"merged_at": "2024-01-30T15:28:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6631.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6631"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6630/comments | https://api.github.com/repos/huggingface/datasets/issues/6630/events | https://github.com/huggingface/datasets/pull/6630 | 2,106,478,275 | PR_kwDODunzps5lYPi3 | 6,630 | Bump max range of dill to 0.3.8 | {
"avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4",
"events_url": "https://api.github.com/users/ringohoffman/events{/privacy}",
"followers_url": "https://api.github.com/users/ringohoffman/followers",
"following_url": "https://api.github.com/users/ringohoffman/following{/other_user}",
"gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ringohoffman",
"id": 27844407,
"login": "ringohoffman",
"node_id": "MDQ6VXNlcjI3ODQ0NDA3",
"organizations_url": "https://api.github.com/users/ringohoffman/orgs",
"received_events_url": "https://api.github.com/users/ringohoffman/received_events",
"repos_url": "https://api.github.com/users/ringohoffman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ringohoffman"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-29T21:35:55Z" | "2024-01-30T16:19:45Z" | "2024-01-30T15:12:25Z" | CONTRIBUTOR | null | Release on Jan 27, 2024: https://pypi.org/project/dill/0.3.8/#history
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6630/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6630.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6630",
"merged_at": "2024-01-30T15:12:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6630.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6630"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6629/comments | https://api.github.com/repos/huggingface/datasets/issues/6629/events | https://github.com/huggingface/datasets/pull/6629 | 2,105,774,482 | PR_kwDODunzps5lV0aF | 6,629 | Support push_to_hub without org/user to default to logged-in user | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-29T15:36:52Z" | "2024-02-05T12:35:43Z" | "2024-02-05T12:29:36Z" | MEMBER | null | This behavior is aligned with:
- the behavior of `datasets` before merging #6519
- the behavior described in the corresponding docstring
- the behavior of `huggingface_hub.create_repo`
Revert "Support push_to_hub canonical datasets (#6519)"
- This reverts commit a887ee78835573f5d80f9e414e8443b4caff3541.
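A sketch of the restored behavior (dataset content and username are illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

# With no namespace in the repo id, the push defaults to the logged-in user's
# namespace (e.g. "my-username/my_dataset") rather than a canonical "my_dataset" repo.
ds.push_to_hub("my_dataset")
```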
Fix #6597. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6629/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6629/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6629.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6629",
"merged_at": "2024-02-05T12:29:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6629.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6629"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6628/comments | https://api.github.com/repos/huggingface/datasets/issues/6628/events | https://github.com/huggingface/datasets/pull/6628 | 2,105,760,502 | PR_kwDODunzps5lVxXU | 6,628 | Make CLI test support multi-processing | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-29T15:30:09Z" | "2024-02-05T10:29:20Z" | "2024-02-05T10:23:13Z" | MEMBER | null | Support passing `--num_proc` to CLI test.
This was really useful recently to run the command on `pubmed`: https://huggingface.co/datasets/pubmed/discussions/11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6628/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6628",
"merged_at": "2024-02-05T10:23:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6628"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6627/comments | https://api.github.com/repos/huggingface/datasets/issues/6627/events | https://github.com/huggingface/datasets/pull/6627 | 2,105,735,816 | PR_kwDODunzps5lVr-t | 6,627 | Disable `tqdm` bars in non-interactive environments | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-29T15:18:21Z" | "2024-01-29T15:47:34Z" | "2024-01-29T15:41:32Z" | CONTRIBUTOR | null | Replace `disable=False` with `disable=None` in the `tqdm` bars to disable them in non-interactive environments (by default).
For more info, see a [similar PR](https://github.com/huggingface/huggingface_hub/pull/2000) in `huggingface_hub`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6627/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6627.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6627",
"merged_at": "2024-01-29T15:41:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6627.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6627"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6626/comments | https://api.github.com/repos/huggingface/datasets/issues/6626/events | https://github.com/huggingface/datasets/pull/6626 | 2,105,482,522 | PR_kwDODunzps5lU0I2 | 6,626 | Raise error on bad split name | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-29T13:17:41Z" | "2024-01-29T15:18:25Z" | "2024-01-29T15:12:18Z" | MEMBER | null | e.g. dashes '-' are not allowed in split names
This should add an error message on datasets with unsupported split names like https://huggingface.co/datasets/open-source-metrics/test
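For illustration, a sketch of the naming rule from the user side (where exactly the error is raised — at creation, push, or load time — is an assumption):
```python
from datasets import Dataset, DatasetDict

ds = Dataset.from_dict({"x": [1, 2, 3]})

# Split names should stick to word characters/underscores, e.g. "validation_small".
ok = DatasetDict({"validation_small": ds})

# A name like "validation-small" contains a dash and is the kind of split name
# this PR now rejects with an explicit error; rename it before pushing.
```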
cc @AndreaFrancis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6626/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6626/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6626.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6626",
"merged_at": "2024-01-29T15:12:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6626.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6626"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6624/comments | https://api.github.com/repos/huggingface/datasets/issues/6624/events | https://github.com/huggingface/datasets/issues/6624 | 2,103,950,718 | I_kwDODunzps59Z71- | 6,624 | How to download the laion-coco dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15981416?v=4",
"events_url": "https://api.github.com/users/vanpersie32/events{/privacy}",
"followers_url": "https://api.github.com/users/vanpersie32/followers",
"following_url": "https://api.github.com/users/vanpersie32/following{/other_user}",
"gists_url": "https://api.github.com/users/vanpersie32/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vanpersie32",
"id": 15981416,
"login": "vanpersie32",
"node_id": "MDQ6VXNlcjE1OTgxNDE2",
"organizations_url": "https://api.github.com/users/vanpersie32/orgs",
"received_events_url": "https://api.github.com/users/vanpersie32/received_events",
"repos_url": "https://api.github.com/users/vanpersie32/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vanpersie32/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanpersie32/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vanpersie32"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-28T03:56:05Z" | "2024-02-06T09:43:31Z" | "2024-02-06T09:43:31Z" | NONE | null | The laion coco dataset is not available now. How to download it
https://huggingface.co/datasets/laion/laion-coco | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6624/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6623/comments | https://api.github.com/repos/huggingface/datasets/issues/6623/events | https://github.com/huggingface/datasets/issues/6623 | 2,103,870,123 | I_kwDODunzps59ZoKr | 6,623 | streaming datasets doesn't work properly with multi-node | {
"avatar_url": "https://avatars.githubusercontent.com/u/30778939?v=4",
"events_url": "https://api.github.com/users/rohitgr7/events{/privacy}",
"followers_url": "https://api.github.com/users/rohitgr7/followers",
"following_url": "https://api.github.com/users/rohitgr7/following{/other_user}",
"gists_url": "https://api.github.com/users/rohitgr7/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rohitgr7",
"id": 30778939,
"login": "rohitgr7",
"node_id": "MDQ6VXNlcjMwNzc4OTM5",
"organizations_url": "https://api.github.com/users/rohitgr7/orgs",
"received_events_url": "https://api.github.com/users/rohitgr7/received_events",
"repos_url": "https://api.github.com/users/rohitgr7/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rohitgr7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohitgr7/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rohitgr7"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"",
""
] | "2024-01-27T23:46:13Z" | "2024-03-08T14:27:08Z" | null | NONE | null | ### Feature request
Let's say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], 2 GPUs (for DDP), and a batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn't get repeated. And since it's already split, I don't have to use `DistributedSampler` (they don't work with iterable datasets anyway, right)?
But in this case I noticed the following:
First iteration:
first GPU will get → [1, 2]
second GPU will get → [3, 4]
Second iteration:
first GPU will get → [5]
second GPU will get → nothing
which actually creates an issue, since with `DistributedSampler` the samples are repeated internally to ensure that none of the GPUs is missing data at any iteration, which is needed for gradient sync.
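For context, here is a minimal sketch of the setup being described (assuming a generic streaming dataset and an already-initialised process group; the dataset name is a placeholder):
```
import torch.distributed as dist
from torch.utils.data import DataLoader
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# rank / world_size as set up by torchrun (assumes dist.init_process_group was called)
rank, world_size = dist.get_rank(), dist.get_world_size()

ds = load_dataset("some/dataset", split="train", streaming=True)  # IterableDataset
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)

# no DistributedSampler: each rank simply iterates over its own share,
# so nothing pads the last, incomplete iteration across ranks
for batch in DataLoader(ds, batch_size=2):
    ...
```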
So my questions are:
1. Since the splitting happens beforehand here, how do I make sure each GPU gets a batch at each iteration to avoid gradient sync issues?
2. Do we need to use `DistributedSampler`? If yes, how?
3. in the docstrings of `split_dataset_by_node`, this is mentioned: *"If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples."* Can you explain the last part here?
4. If `dataset.n_shards % world_size != 0`, is it possible to shard the streaming dataset on the fly to avoid the case where data is missing?
### Motivation
Streaming datasets should work with DDP: big LLMs require a lot of data, DDP/multi-node is mostly used to train such models, and streaming can actually help solve the data part of it.
### Your contribution
Yes, I can help in submitting the PR once we get mutual understanding on how it should behave. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6623/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6623/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6622/comments | https://api.github.com/repos/huggingface/datasets/issues/6622/events | https://github.com/huggingface/datasets/issues/6622 | 2,103,780,697 | I_kwDODunzps59ZSVZ | 6,622 | multi-GPU map does not work | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-27T20:06:08Z" | "2024-02-08T11:18:21Z" | "2024-02-08T11:18:21Z" | NONE | null | ### Describe the bug
Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y
Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy
Here is the video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here that it's better to watch the 3-minute video than to explain it here):
https://youtu.be/RNbdPkSppc4
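For readers who don't want to open the pastebins, the usual multi-process GPU `map` pattern looks roughly like this (a generic sketch with placeholder names, not the reporter's exact code):
```
import torch
from datasets import load_dataset

NUM_GPUS = torch.cuda.device_count()

def process(batch, rank):
    device = f"cuda:{rank % NUM_GPUS}"  # pin each worker process to one GPU
    # ... move the model to `device` and run inference on the batch ...
    return batch

ds = load_dataset("some/dataset", split="train")
ds = ds.map(process, batched=True, with_rank=True, num_proc=NUM_GPUS)
```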
### Steps to reproduce the bug
-
### Expected behavior
-
### Environment info
x2 RTX A4000 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6622/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6622/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6621/comments | https://api.github.com/repos/huggingface/datasets/issues/6621/events | https://github.com/huggingface/datasets/issues/6621 | 2,103,675,294 | I_kwDODunzps59Y4me | 6,621 | deleted | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-27T16:59:58Z" | "2024-01-27T17:14:43Z" | "2024-01-27T17:14:43Z" | NONE | null | ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6621/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6621/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6620/comments | https://api.github.com/repos/huggingface/datasets/issues/6620/events | https://github.com/huggingface/datasets/issues/6620 | 2,103,110,536 | I_kwDODunzps59WuuI | 6,620 | wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id} | {
"avatar_url": "https://avatars.githubusercontent.com/u/101498700?v=4",
"events_url": "https://api.github.com/users/kiehls90/events{/privacy}",
"followers_url": "https://api.github.com/users/kiehls90/followers",
"following_url": "https://api.github.com/users/kiehls90/following{/other_user}",
"gists_url": "https://api.github.com/users/kiehls90/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kiehls90",
"id": 101498700,
"login": "kiehls90",
"node_id": "U_kgDOBgy_TA",
"organizations_url": "https://api.github.com/users/kiehls90/orgs",
"received_events_url": "https://api.github.com/users/kiehls90/received_events",
"repos_url": "https://api.github.com/users/kiehls90/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kiehls90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiehls90/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kiehls90"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"",
""
] | "2024-01-27T01:00:09Z" | "2024-02-06T09:40:19Z" | "2024-02-06T09:40:19Z" | NONE | null | ### Describe the bug
I'm trying to run a RAG example, and the dataset is wiki_dpr.
Downloading and extracting wiki_dpr completed successfully.
However, at the generating train split stage, an error from wiki_dpr.py keeps popping up.
Specifically, in `_generate_examples`:
1. The following error occurs in the line **id, text, title = line.strip().split("\t")**
ValueError: not enough values to unpack (expected 3, got 2)
-> This part handles exceptions so that even if an error occurs, it passes.
2. **ID mismatch between lines {id} and vector {vec_id}**
This error seems to occur at the line `assert int(id) == int(vec_id)`.
After I handled the exception in the split error, generating train split progressed to 80%, but an id mismatch error occurred at about the 16200000th vector id.
Debugging is even more difficult because it takes a long time to download and split wiki_dpr. I need help. Thank you in advance!
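For anyone debugging this locally, one way to surface the malformed lines instead of silently skipping them is defensive parsing along these lines (a standalone illustration of a hypothetical tweak to `_generate_examples`, not the official fix; the sample lines are made up):
```
lines = ["1\tsome passage text\ta title", "2\tmissing a field"]  # made-up examples

for line in lines:
    fields = line.strip().split("\t")
    if len(fields) != 3:
        # log the offending line so the id range of the bad records can be inspected
        print(f"Malformed line ({len(fields)} fields): {line!r}")
        continue
    id_, text, title = fields
    print(id_, title)
```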
### Steps to reproduce the bug
Occurs in the generating train split step when running the rag example in the transformers repository.
Specifically, it is an error in wiki_dpr.py.
### Expected behavior
.
### Environment info
python 3.8 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6620/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6620/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6619/comments | https://api.github.com/repos/huggingface/datasets/issues/6619/events | https://github.com/huggingface/datasets/pull/6619 | 2,102,407,478 | PR_kwDODunzps5lK2VY | 6,619 | Migrate from `setup.cfg` to `pyproject.toml` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-26T15:27:10Z" | "2024-01-26T15:53:40Z" | "2024-01-26T15:47:32Z" | CONTRIBUTOR | null | Based on https://github.com/huggingface/huggingface_hub/pull/1971 in `hfh` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6619/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6619/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6619.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6619",
"merged_at": "2024-01-26T15:47:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6619.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6619"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6618/comments | https://api.github.com/repos/huggingface/datasets/issues/6618/events | https://github.com/huggingface/datasets/issues/6618 | 2,101,868,198 | I_kwDODunzps59R_am | 6,618 | While importing load_dataset from datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/77973415?v=4",
"events_url": "https://api.github.com/users/Era-cell/events{/privacy}",
"followers_url": "https://api.github.com/users/Era-cell/followers",
"following_url": "https://api.github.com/users/Era-cell/following{/other_user}",
"gists_url": "https://api.github.com/users/Era-cell/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Era-cell",
"id": 77973415,
"login": "Era-cell",
"node_id": "MDQ6VXNlcjc3OTczNDE1",
"organizations_url": "https://api.github.com/users/Era-cell/orgs",
"received_events_url": "https://api.github.com/users/Era-cell/received_events",
"repos_url": "https://api.github.com/users/Era-cell/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Era-cell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Era-cell/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Era-cell"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-26T09:21:57Z" | "2024-02-06T10:57:01Z" | "2024-02-06T09:25:54Z" | NONE | null | ### Describe the bug
This is the error I received: `cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'`.
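For what it's worth, `DEFAULT_CIPHERS` was removed in urllib3 2.x, so this error usually means an older `requests`/`botocore` stack is being combined with urllib3 2 (an assumption about this particular environment, not a confirmed diagnosis). A quick check:
```
import urllib3
import requests

# urllib3 2.x dropped DEFAULT_CIPHERS; a 2.x version here combined with an older
# requests/botocore stack would explain the ImportError above
print(urllib3.__version__, requests.__version__)
```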
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6618/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6617 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6617/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6617/comments | https://api.github.com/repos/huggingface/datasets/issues/6617/events | https://github.com/huggingface/datasets/pull/6617 | 2,100,459,449 | PR_kwDODunzps5lEagV | 6,617 | Fix CI: pyarrow 15, pandas 2.2 and sqlachemy | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-25T13:57:41Z" | "2024-01-26T14:56:46Z" | "2024-01-26T14:50:44Z" | MEMBER | null | this should fix the CI failures on `main`
close https://github.com/huggingface/datasets/issues/5477 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6617/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6617/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6617.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6617",
"merged_at": "2024-01-26T14:50:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6617.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6617"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6616 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6616/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6616/comments | https://api.github.com/repos/huggingface/datasets/issues/6616/events | https://github.com/huggingface/datasets/pull/6616 | 2,100,125,709 | PR_kwDODunzps5lDSEL | 6,616 | Use schema metadata only if it matches features | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-25T11:01:14Z" | "2024-01-26T16:25:24Z" | "2024-01-26T16:19:12Z" | MEMBER | null | e.g. if we use `map` in arrow format and transform the table, the returned table might have new columns but the metadata might be wrong | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6616/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6616/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6616.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6616",
"merged_at": "2024-01-26T16:19:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6616.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6616"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6615/comments | https://api.github.com/repos/huggingface/datasets/issues/6615/events | https://github.com/huggingface/datasets/issues/6615 | 2,098,951,409 | I_kwDODunzps59G3Tx | 6,615 | ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/22179777?v=4",
"events_url": "https://api.github.com/users/ftkeys/events{/privacy}",
"followers_url": "https://api.github.com/users/ftkeys/followers",
"following_url": "https://api.github.com/users/ftkeys/following{/other_user}",
"gists_url": "https://api.github.com/users/ftkeys/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ftkeys",
"id": 22179777,
"login": "ftkeys",
"node_id": "MDQ6VXNlcjIyMTc5Nzc3",
"organizations_url": "https://api.github.com/users/ftkeys/orgs",
"received_events_url": "https://api.github.com/users/ftkeys/received_events",
"repos_url": "https://api.github.com/users/ftkeys/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ftkeys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ftkeys/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ftkeys"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-24T19:37:03Z" | "2024-01-24T19:42:30Z" | "2024-01-24T19:40:11Z" | NONE | null | ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6615/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6615/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6614 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6614/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6614/comments | https://api.github.com/repos/huggingface/datasets/issues/6614/events | https://github.com/huggingface/datasets/issues/6614 | 2,098,884,520 | I_kwDODunzps59Gm-o | 6,614 | `datasets/downloads` cleanup tool | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"",
""
] | "2024-01-24T18:52:10Z" | "2024-01-24T18:55:09Z" | null | CONTRIBUTOR | null | ### Feature request
Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files
e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do:
```
sudo find /data/huggingface/datasets/downloads -type f -mtime +3 -exec rm {} \+
sudo find /data/huggingface/datasets/downloads -type d -empty -delete
```
Could the cleanup be integrated into `huggingface-cli`, or a different tool be provided, to keep the folders tidy and not consume inodes and space?
e.g. there were tens of thousands of `.lock` files - I don't know why they never get removed - lock files should be temporary for the duration of the operation requiring the lock and not remain after the operation finished, IMHO.
Also I think one should be able to nuke `datasets/downloads` w/o hurting the cache, but I think there are some datasets that rely on files extracted under this dir - or at least they did in the past - which is very difficult to manage since one has no idea what is safe to delete and what not.
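For anyone needing a stopgap in the meantime, here is a rough Python equivalent of the `find` commands above (an untested sketch using only the standard library; adjust the root path and age threshold to your setup):
```
import time
from pathlib import Path

root = Path("/data/huggingface/datasets/downloads")  # adjust to your cache location
cutoff = time.time() - 3 * 24 * 3600                 # anything older than 3 days

# delete stale files (including leftover .lock files)
for p in [p for p in root.rglob("*") if p.is_file()]:
    if p.stat().st_mtime < cutoff:
        p.unlink()

# then prune now-empty directories, deepest first
for d in sorted((p for p in root.rglob("*") if p.is_dir()), key=lambda p: len(p.parts), reverse=True):
    if not any(d.iterdir()):
        d.rmdir()
```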
Thank you
@Wauplin (requested to be tagged) | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6614/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6614/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6612/comments | https://api.github.com/repos/huggingface/datasets/issues/6612/events | https://github.com/huggingface/datasets/issues/6612 | 2,098,078,210 | I_kwDODunzps59DiIC | 6,612 | cnn_dailymail repeats itself | {
"avatar_url": "https://avatars.githubusercontent.com/u/8274752?v=4",
"events_url": "https://api.github.com/users/KeremZaman/events{/privacy}",
"followers_url": "https://api.github.com/users/KeremZaman/followers",
"following_url": "https://api.github.com/users/KeremZaman/following{/other_user}",
"gists_url": "https://api.github.com/users/KeremZaman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KeremZaman",
"id": 8274752,
"login": "KeremZaman",
"node_id": "MDQ6VXNlcjgyNzQ3NTI=",
"organizations_url": "https://api.github.com/users/KeremZaman/orgs",
"received_events_url": "https://api.github.com/users/KeremZaman/received_events",
"repos_url": "https://api.github.com/users/KeremZaman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KeremZaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeremZaman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KeremZaman"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-24T11:38:25Z" | "2024-02-01T08:14:50Z" | "2024-02-01T08:14:50Z" | NONE | null | ### Describe the bug
When I try to load the `cnn_dailymail` dataset, it takes longer than usual, and when I check the dataset it's 3x bigger than it's supposed to be.
Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check the length of the train split, it says 861339.
Also I checked data:
```
>>> ds['train']['highlights'][0]
"Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday . Young actor says he has no plans to fritter his cash away . Radcliffe's earnings from first five Potter films have been held in trust fund ."````
>>> ds['train']['highlights'][0]
"Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday . Young actor says he has no plans to fritter his cash away . Radcliffe's earnings from first five Potter films have been held in trust fund ."````
>>> ds['train']['highlights'][287113]
"Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday .\nYoung actor says he has no plans to fritter his cash away .\nRadcliffe's earnings from first five Potter films have been held in trust fund ."````
>>> ds['train']['highlights'][574226]
"Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday .\nYoung actor says he has no plans to fritter his cash away .\nRadcliffe's earnings from first five Potter films have been held in trust fund ."
```
The dataset seems to have been updated 6 days ago to convert it to Parquet. Probably there is some issue with backward compatibility.
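A quick way to confirm the duplication is to count unique ids (assuming the dataset's `id` column):
```
from datasets import load_dataset

ds = load_dataset("cnn_dailymail", "3.0.0")
train = ds["train"]
print(len(train), len(set(train["id"])))  # a unique count near 287k would confirm repeated rows
```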
### Steps to reproduce the bug
1.
```
from datasets import load_dataset
ds = load_dataset('cnn_dailymail', '3.0.0')
len(ds['train'])
```
### Expected behavior
It should not repeat itself.
### Environment info
datasets==2.13.2
Python==3.7.13 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6612/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6611/comments | https://api.github.com/repos/huggingface/datasets/issues/6611/events | https://github.com/huggingface/datasets/issues/6611 | 2,096,004,858 | I_kwDODunzps587n76 | 6,611 | `load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError` | {
"avatar_url": "https://avatars.githubusercontent.com/u/15320635?v=4",
"events_url": "https://api.github.com/users/zotroneneis/events{/privacy}",
"followers_url": "https://api.github.com/users/zotroneneis/followers",
"following_url": "https://api.github.com/users/zotroneneis/following{/other_user}",
"gists_url": "https://api.github.com/users/zotroneneis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zotroneneis",
"id": 15320635,
"login": "zotroneneis",
"node_id": "MDQ6VXNlcjE1MzIwNjM1",
"organizations_url": "https://api.github.com/users/zotroneneis/orgs",
"received_events_url": "https://api.github.com/users/zotroneneis/received_events",
"repos_url": "https://api.github.com/users/zotroneneis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zotroneneis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zotroneneis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zotroneneis"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-23T12:37:57Z" | "2024-01-23T12:37:57Z" | null | NONE | null | ### Describe the bug
When loading a large dataset (>1000GB) from S3 I run into the following error:
```
Traceback (most recent call last):
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper
return await func(*args, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/aiobotocore/client.py", line 383, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the GetObject operation: The difference between the request time and the current time is too large.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/alp/phoneme-classification.monorepo/aws_sagemaker/data_processing/inspect_final_dataset.py", line 13, in <module>
dataset = load_from_disk("s3://speech-recognition-processed-data/whisper/de/train_data/", storage_options=storage_options)
File "/home/alp/.local/lib/python3.10/site-packages/datasets/load.py", line 1902, in load_from_disk
return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/home/alp/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1686, in load_from_disk
fs.download(src_dataset_path, dest_dataset_path.as_posix(), recursive=True)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1480, in download
return self.get(rpath, lpath, recursive=recursive, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync
raise return_result
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner
result[0] = await coro
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 604, in _get
return await _run_coros_in_chunks(
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 257, in _run_coros_in_chunks
await asyncio.gather(*chunk, return_exceptions=return_exceptions),
File "/usr/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
return await fut
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1193, in _get_file
body, content_length = await _open_file(range=0)
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1184, in _open_file
resp = await self._call_s3(
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 348, in _call_s3
return await _error_wrapper(
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 140, in _error_wrapper
raise err
PermissionError: The difference between the request time and the current time is too large.
```
The usual problem for this error is that the time on my local machine is out of sync with the current time. However, this is not the case here. I checked the time and even reset it with no success. See resources here:
- https://stackoverflow.com/questions/4770635/s3-error-the-difference-between-the-request-time-and-the-current-time-is-too-la
- https://stackoverflow.com/questions/25964491/aws-s3-upload-fails-requesttimetooskewed
The error does not appear when loading a smaller dataset (e.g. our test set) from the same s3 path.
### Steps to reproduce the bug
1. Create large dataset
2. Try loading it from s3 using:
```
dataset = load_from_disk("s3://...", storage_options=storage_options)
```
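For completeness, `storage_options` here refers to the usual s3fs-style credentials dict (an assumption about the reporter's setup; the values below are placeholders):
```
from datasets import load_from_disk

storage_options = {
    "key": "<aws_access_key_id>",
    "secret": "<aws_secret_access_key>",
    # "token": "<aws_session_token>",  # only when using temporary credentials
}
dataset = load_from_disk("s3://bucket/prefix/", storage_options=storage_options)
```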
### Expected behavior
Load dataset without running into this error.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.3
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6611/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6610/comments | https://api.github.com/repos/huggingface/datasets/issues/6610/events | https://github.com/huggingface/datasets/issues/6610 | 2,095,643,711 | I_kwDODunzps586Pw_ | 6,610 | cast_column to Sequence(subfeatures_dict) has err | {
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neiblegy",
"id": 16574677,
"login": "neiblegy",
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neiblegy"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-23T09:32:32Z" | "2024-01-25T02:15:23Z" | "2024-01-25T02:15:23Z" | NONE | null | ### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
example["my_labeled_bbox"] = {"bbox": [100,100,200,200], "label": "cat"}
return example
ais_dataset = ais_dataset.map(add_class, batched=False, num_proc=32)
ais_dataset = ais_dataset.cast_column("my_labeled_bbox", Sequence(
{
"bbox": Sequence(Value(dtype="int64")),
"label": ClassLabel(names=["cat", "dog"])
}))
print(ais_dataset[0])
```
However, executing this code results in an error:
```
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
int64
to
Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)
```
Upon examining the source code in datasets/table.py at line 2035:
```
if isinstance(feature, Sequence) and isinstance(feature.feature, dict):
feature = {
name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items()
}
```
I noticed that if subfeature is of type Sequence, the code results in Sequence(Sequence(...), ...) and Sequence(ClassLabel(...), ...), which appears to be the source of the error.
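A possible workaround in the meantime, since `my_labeled_bbox` holds a single dict rather than a list of dicts, is to cast the column to a plain dict of features instead of wrapping it in `Sequence` (an untested sketch reusing the `ais_dataset` variable from the demo above):
```
from datasets import Sequence, Value, ClassLabel

ais_dataset = ais_dataset.cast_column(
    "my_labeled_bbox",
    {
        "bbox": Sequence(Value(dtype="int64")),
        "label": ClassLabel(names=["cat", "dog"]),
    },
)
```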
### Steps to reproduce the bug
run my demo code
### Expected behavior
no exception
### Environment info
python 3.9
datasets: 2.16.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6610/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6609/comments | https://api.github.com/repos/huggingface/datasets/issues/6609/events | https://github.com/huggingface/datasets/issues/6609 | 2,095,085,650 | I_kwDODunzps584HhS | 6,609 | Wrong path for cache directory in offline mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/42117435?v=4",
"events_url": "https://api.github.com/users/je-santos/events{/privacy}",
"followers_url": "https://api.github.com/users/je-santos/followers",
"following_url": "https://api.github.com/users/je-santos/following{/other_user}",
"gists_url": "https://api.github.com/users/je-santos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/je-santos",
"id": 42117435,
"login": "je-santos",
"node_id": "MDQ6VXNlcjQyMTE3NDM1",
"organizations_url": "https://api.github.com/users/je-santos/orgs",
"received_events_url": "https://api.github.com/users/je-santos/received_events",
"repos_url": "https://api.github.com/users/je-santos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/je-santos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/je-santos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/je-santos"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"",
""
] | "2024-01-23T01:47:19Z" | "2024-02-06T17:21:25Z" | "2024-02-06T17:21:25Z" | NONE | null | ### Describe the bug
Dear huggingfacers,
I'm trying to use a subset of the-stack dataset. When I run the command the first time
```
dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )
```
It downloads the files and caches them normally.
Nevertheless, my compute nodes are not online (`HF_DATASETS_OFFLINE=1`). Whenever I try to run the command again, the library passes the wrong cache path:
`Cache directory for the-stack doesn't exist at /Users/user/.cache/huggingface/datasets/bigcode___the-stack/default-data_dir=data%2Ffortran-data_dir=data%2Ffortran`
when the right path is:
`'/Users/user/.cache/huggingface/datasets/bigcode___the-stack/default-data_dir=data\%2Ffortran`
Not sure why those redundancies are included in the path. If I try adding the correct path through the `cache_dir` argument, it throws an error:
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'bigcode/the-stack': Offline mode is enabled.
Your help with this issue is greatly appreciated. Thanks a lot for the great work.
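In case it helps while this is being looked at, a common way to sidestep the cache-path lookup entirely is to materialise the subset once while online and reload it from disk on the offline nodes (a workaround sketch, not a fix for the path bug; the shared path is a placeholder):
```
from datasets import load_dataset, load_from_disk

# on a machine with internet access
ds = load_dataset("bigcode/the-stack", data_dir="data/fortran", split="train")
ds.save_to_disk("/shared/the-stack-fortran")

# on the offline compute nodes (HF_DATASETS_OFFLINE=1)
ds = load_from_disk("/shared/the-stack-fortran")
```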
### Steps to reproduce the bug
1:
`dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )`
2:
`HF_DATASETS_OFFLINE=1`
3:
`dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )`
### Expected behavior
Being able to use the cached data.
### Environment info
several different systems | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6609/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6608/comments | https://api.github.com/repos/huggingface/datasets/issues/6608/events | https://github.com/huggingface/datasets/pull/6608 | 2,094,153,292 | PR_kwDODunzps5ku_lN | 6,608 | Add `with_rank` param to `Dataset.filter` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-22T15:19:16Z" | "2024-01-29T16:43:11Z" | "2024-01-29T16:36:53Z" | CONTRIBUTOR | null | Fix #6564 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6608/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6608/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6608.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6608",
"merged_at": "2024-01-29T16:36:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6608.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6608"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6607 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6607/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6607/comments | https://api.github.com/repos/huggingface/datasets/issues/6607/events | https://github.com/huggingface/datasets/pull/6607 | 2,091,766,063 | PR_kwDODunzps5knGse | 6,607 | Update features.py to avoid bfloat16 unsupported error | {
"avatar_url": "https://avatars.githubusercontent.com/u/75697181?v=4",
"events_url": "https://api.github.com/users/skaulintel/events{/privacy}",
"followers_url": "https://api.github.com/users/skaulintel/followers",
"following_url": "https://api.github.com/users/skaulintel/following{/other_user}",
"gists_url": "https://api.github.com/users/skaulintel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/skaulintel",
"id": 75697181,
"login": "skaulintel",
"node_id": "MDQ6VXNlcjc1Njk3MTgx",
"organizations_url": "https://api.github.com/users/skaulintel/orgs",
"received_events_url": "https://api.github.com/users/skaulintel/received_events",
"repos_url": "https://api.github.com/users/skaulintel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/skaulintel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skaulintel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/skaulintel"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-20T00:39:44Z" | "2024-03-22T00:34:48Z" | null | NONE | null | Fixes https://github.com/huggingface/datasets/issues/6566
Let me know if there are any tests I need to clear. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6607/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6607/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6607.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6607",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6607.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6607"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6606 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6606/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6606/comments | https://api.github.com/repos/huggingface/datasets/issues/6606/events | https://github.com/huggingface/datasets/pull/6606 | 2,091,088,785 | PR_kwDODunzps5kk3KB | 6,606 | Dedicated RNG object for fingerprinting | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-19T18:34:47Z" | "2024-01-26T15:11:38Z" | "2024-01-26T15:05:34Z" | CONTRIBUTOR | null | Closes https://github.com/huggingface/datasets/issues/6604, closes https://github.com/huggingface/datasets/issues/2775 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6606/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6606/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6606.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6606",
"merged_at": "2024-01-26T15:05:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6606.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6606"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6605/comments | https://api.github.com/repos/huggingface/datasets/issues/6605/events | https://github.com/huggingface/datasets/issues/6605 | 2,090,188,376 | I_kwDODunzps58lb5Y | 6,605 | ELI5 no longer available, but referenced in example code | {
"avatar_url": "https://avatars.githubusercontent.com/u/81480344?v=4",
"events_url": "https://api.github.com/users/drdsgvo/events{/privacy}",
"followers_url": "https://api.github.com/users/drdsgvo/followers",
"following_url": "https://api.github.com/users/drdsgvo/following{/other_user}",
"gists_url": "https://api.github.com/users/drdsgvo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/drdsgvo",
"id": 81480344,
"login": "drdsgvo",
"node_id": "MDQ6VXNlcjgxNDgwMzQ0",
"organizations_url": "https://api.github.com/users/drdsgvo/orgs",
"received_events_url": "https://api.github.com/users/drdsgvo/received_events",
"repos_url": "https://api.github.com/users/drdsgvo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/drdsgvo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drdsgvo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/drdsgvo"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-19T10:21:52Z" | "2024-02-01T17:58:23Z" | "2024-02-01T17:58:22Z" | NONE | null | Here, an example code is given:
https://huggingface.co/docs/transformers/tasks/language_modeling
This code and article reference the ELI5 dataset.
ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5
"Defunct: Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data.
Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable.
"
Please change the example code to use a different dataset. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6605/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6604/comments | https://api.github.com/repos/huggingface/datasets/issues/6604/events | https://github.com/huggingface/datasets/issues/6604 | 2,089,713,945 | I_kwDODunzps58joEZ | 6,604 | Transform fingerprint collisions due to setting fixed random seed | {
"avatar_url": "https://avatars.githubusercontent.com/u/6687910?v=4",
"events_url": "https://api.github.com/users/normster/events{/privacy}",
"followers_url": "https://api.github.com/users/normster/followers",
"following_url": "https://api.github.com/users/normster/following{/other_user}",
"gists_url": "https://api.github.com/users/normster/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/normster",
"id": 6687910,
"login": "normster",
"node_id": "MDQ6VXNlcjY2ODc5MTA=",
"organizations_url": "https://api.github.com/users/normster/orgs",
"received_events_url": "https://api.github.com/users/normster/received_events",
"repos_url": "https://api.github.com/users/normster/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/normster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/normster/subscriptions",
"type": "User",
"url": "https://api.github.com/users/normster"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-19T06:32:25Z" | "2024-01-26T15:05:35Z" | "2024-01-26T15:05:35Z" | NONE | null | ### Describe the bug
The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random seed, which is common practice: https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_full.yaml#L45.
This results in fingerprint collisions, which lead to silently loading incorrect cache files corresponding to completely different datasets.
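A minimal illustration of the collision mechanism, using only the standard library:
```
import random
import uuid

random.seed(42)                  # training scripts commonly fix the seed like this
print(random.getrandbits(64))    # same value on every run -> fingerprints collide

random.seed(42)
print(random.getrandbits(64))    # identical again

print(uuid.uuid4().hex)          # uses os.urandom, unaffected by random.seed
```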
### Steps to reproduce the bug
n/a
### Expected behavior
Use `uuid` v4 instead of `random.getrandbits()`
### Environment info
`datasets` main branch | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6604/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6603/comments | https://api.github.com/repos/huggingface/datasets/issues/6603/events | https://github.com/huggingface/datasets/issues/6603 | 2,089,230,766 | I_kwDODunzps58hyGu | 6,603 | datasets map `cache_file_name` does not work | {
"avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4",
"events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/ChenchaoZhao/followers",
"following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChenchaoZhao",
"id": 35147961,
"login": "ChenchaoZhao",
"node_id": "MDQ6VXNlcjM1MTQ3OTYx",
"organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs",
"received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events",
"repos_url": "https://api.github.com/users/ChenchaoZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChenchaoZhao"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-18T23:08:30Z" | "2024-01-28T04:01:15Z" | null | NONE | null | ### Describe the bug
In the documentation, the `datasets.Dataset.map` argument `cache_file_name` is said to be a string, but it doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it crashes
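A concrete version of these steps, as I read them (any tiny dataset will do; the filename is arbitrary):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
# step 4: per the report above, this is the call that crashes
ds = ds.map(lambda ex: {"y": ex["x"] * 2}, cache_file_name="some_filename")
```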
### Expected behavior
It will tell you the filename you specified does not exist or it will generate a new file and tell you the filename does not exist.
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.12.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6603/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6603/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6602/comments | https://api.github.com/repos/huggingface/datasets/issues/6602/events | https://github.com/huggingface/datasets/issues/6602 | 2,089,217,483 | I_kwDODunzps58hu3L | 6,602 | Index error when data is large | {
"avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4",
"events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/ChenchaoZhao/followers",
"following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChenchaoZhao",
"id": 35147961,
"login": "ChenchaoZhao",
"node_id": "MDQ6VXNlcjM1MTQ3OTYx",
"organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs",
"received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events",
"repos_url": "https://api.github.com/users/ChenchaoZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChenchaoZhao"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-18T23:00:47Z" | "2024-01-18T23:00:47Z" | null | NONE | null | ### Describe the bug
At the `save_to_disk` step, `max_shard_size` defaults to `500MB`. However, one row of the dataset might be larger than `500MB`, and then saving throws an index error. Without looking at the source code, the bug seems to be a wrong calculation of the number of shards, which I think is
`total_size / min(max_shard_size, row_size)` but should be `total_size / max(max_shard_size, row_size)`.
The fix is setting a larger `max_shard_size`.
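A sketch of the failure mode and the workaround (sizes are illustrative):
```python
import numpy as np
from datasets import Dataset

# ten rows of roughly 4 MB each, i.e. "large dense tensors per row"
big = Dataset.from_dict({"tensor": [np.zeros((1024, 1024), dtype=np.float32) for _ in range(10)]})

# big.save_to_disk("ds", max_shard_size="1MB")    # shard size smaller than a row -> IndexError described above
big.save_to_disk("ds", max_shard_size="500MB")    # workaround: keep max_shard_size above the largest row
```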
### Steps to reproduce the bug
1. create a dataset with large dense tensors per row
2. set a small `max_shard_size` say 1MB
3. `save_to_disk`
### Expected behavior
```
raise IndexError(f"Index {index} out of range for dataset of size {size}.")
IndexError: Index 10 out of range for dataset of size 10.
```
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.12.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6602/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6601 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6601/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6601/comments | https://api.github.com/repos/huggingface/datasets/issues/6601/events | https://github.com/huggingface/datasets/pull/6601 | 2,088,624,054 | PR_kwDODunzps5kcWN0 | 6,601 | add safety checks when using only part of dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/63422923?v=4",
"events_url": "https://api.github.com/users/benseddikismail/events{/privacy}",
"followers_url": "https://api.github.com/users/benseddikismail/followers",
"following_url": "https://api.github.com/users/benseddikismail/following{/other_user}",
"gists_url": "https://api.github.com/users/benseddikismail/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/benseddikismail",
"id": 63422923,
"login": "benseddikismail",
"node_id": "MDQ6VXNlcjYzNDIyOTIz",
"organizations_url": "https://api.github.com/users/benseddikismail/orgs",
"received_events_url": "https://api.github.com/users/benseddikismail/received_events",
"repos_url": "https://api.github.com/users/benseddikismail/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/benseddikismail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benseddikismail/subscriptions",
"type": "User",
"url": "https://api.github.com/users/benseddikismail"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-18T16:16:59Z" | "2024-02-08T14:33:10Z" | null | NONE | null | Added some checks to prevent errors that arrise when using evaluate.py on only a portion of the squad 2.0 dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6601/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6601/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6601.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6601",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6601.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6601"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6600/comments | https://api.github.com/repos/huggingface/datasets/issues/6600/events | https://github.com/huggingface/datasets/issues/6600 | 2,088,446,385 | I_kwDODunzps58eymx | 6,600 | Loading CSV exported dataset has unexpected format | {
"avatar_url": "https://avatars.githubusercontent.com/u/59572247?v=4",
"events_url": "https://api.github.com/users/OrianeN/events{/privacy}",
"followers_url": "https://api.github.com/users/OrianeN/followers",
"following_url": "https://api.github.com/users/OrianeN/following{/other_user}",
"gists_url": "https://api.github.com/users/OrianeN/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OrianeN",
"id": 59572247,
"login": "OrianeN",
"node_id": "MDQ6VXNlcjU5NTcyMjQ3",
"organizations_url": "https://api.github.com/users/OrianeN/orgs",
"received_events_url": "https://api.github.com/users/OrianeN/received_events",
"repos_url": "https://api.github.com/users/OrianeN/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OrianeN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OrianeN/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OrianeN"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-18T14:48:27Z" | "2024-01-23T14:42:32Z" | null | NONE | null | ### Describe the bug
I wanted to save an HF dataset of translations and load it again in another script, but I'm a bit confused by the documentation and the result I got, so I'm opening this issue to ask whether this behavior is expected.
### Steps to reproduce the bug
The documentation I've mainly consulted is https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/loading_methods#datasets.load_dataset and https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset (where I've found `.to_csv()`)
```python
from datasets import load_dataset

# Load a dataset of translations
test_dataset = load_dataset("opus100", name="en-fr", split="test")
# Save with .to_csv()
test_csv_path = "try_testset_save.csv"
test_dataset.to_csv(test_csv_path)
# Load dataset from the CSV
loaded_dataset = load_dataset("csv", data_files=test_csv_path)
print(test_dataset_fromfile[0]["translation"])
print(test_dataset_fromfile[0]["translation"]["en"])
```
```
Creating CSV from Arrow format: 100%
2/2 [00:00<00:00, 47.99ba/s]
Downloading data files: 100%
1/1 [00:00<00:00, 65.33it/s]
Extracting data files: 100%
1/1 [00:00<00:00, 42.10it/s]
Generating train split:
2000/0 [00:00<00:00, 47486.09 examples/s]
{'en': "She wasn't going to vaccinate her kid against polio, no way.", 'fr': 'Elle ne vaccinerait pas son enfant contre la polio. Pas question.'}
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[29], line 11
9 loaded_dataset = load_dataset("csv", data_files=test_csv_path)
10 print(test_dataset_fromfile[0]["translation"])
---> 11 print(test_dataset_fromfile[0]["translation"]["en"])
TypeError: string indices must be integers, not 'str'
```
### Expected behavior
Each translation was saved as a stringified dict like `"{'en': ""She wasn't going to vaccinate her kid against polio, no way."", 'fr': 'Elle ne vaccinerait pas son enfant contre la polio. Pas question.'}"`, whereas I would have expected 2 columns (the 1st with English segments, the 2nd with French segments). I was also expecting `load_dataset` to infer the feature type automatically, as I haven't seen anything about it in the documentation.
Do you have an example of how to effectively save and load datasets of translations?
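For what it's worth, a sketch that side-steps the problem by round-tripping through Parquet instead of CSV (Parquet keeps the nested `translation` feature); this is only an illustration, not necessarily the recommended approach:
```python
from datasets import Dataset, load_dataset

test_dataset = load_dataset("opus100", name="en-fr", split="test")
test_dataset.to_parquet("try_testset_save.parquet")          # nested dicts survive the round trip
reloaded = Dataset.from_parquet("try_testset_save.parquet")
print(reloaded[0]["translation"]["en"])
```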
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.11.5
- `huggingface_hub` version: 0.16.4
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6600/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6599/comments | https://api.github.com/repos/huggingface/datasets/issues/6599/events | https://github.com/huggingface/datasets/issues/6599 | 2,086,684,664 | I_kwDODunzps58YEf4 | 6,599 | Easy way to segment into 30s snippets given an m4a file and a vtt file | {
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RonanKMcGovern",
"id": 78278410,
"login": "RonanKMcGovern",
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RonanKMcGovern"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"",
""
] | "2024-01-17T17:51:40Z" | "2024-01-23T10:42:17Z" | "2024-01-22T15:35:49Z" | NONE | null | ### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already).
### Motivation
It's easy to create a VTT file from an audio file. If there were auto-segmenting, creating datasets would be much faster.
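For illustration, a rough sketch of the kind of auto-segmenting meant here (fixed 30-second windows with pydub; file names are made up):
```python
from pydub import AudioSegment

audio = AudioSegment.from_file("episode.m4a")
chunk_ms = 30_000
for i, start in enumerate(range(0, len(audio), chunk_ms)):
    audio[start:start + chunk_ms].export(f"episode_{i:04d}.wav", format="wav")
# the matching .vtt cues would then be grouped into the same 30 s windows to build the text column
```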
### Your contribution
I have made a custom script to do this, but it's not all that clean; it uses librosa and pydub. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6599/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6598/comments | https://api.github.com/repos/huggingface/datasets/issues/6598/events | https://github.com/huggingface/datasets/issues/6598 | 2,084,236,605 | I_kwDODunzps58Ou09 | 6,598 | Unexpected keyword argument 'hf' when downloading CSV dataset from S3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5592111?v=4",
"events_url": "https://api.github.com/users/dguenms/events{/privacy}",
"followers_url": "https://api.github.com/users/dguenms/followers",
"following_url": "https://api.github.com/users/dguenms/following{/other_user}",
"gists_url": "https://api.github.com/users/dguenms/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dguenms",
"id": 5592111,
"login": "dguenms",
"node_id": "MDQ6VXNlcjU1OTIxMTE=",
"organizations_url": "https://api.github.com/users/dguenms/orgs",
"received_events_url": "https://api.github.com/users/dguenms/received_events",
"repos_url": "https://api.github.com/users/dguenms/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dguenms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dguenms/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dguenms"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-16T15:16:01Z" | "2024-03-22T02:08:06Z" | null | NONE | null | ### Describe the bug
I receive this error message when using `load_dataset` with the "csv" path and `data_files="s3://..."`:
```
TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-with-unexpected-keyword-argument-error-in
Full stacktrace:
```
.../site-packages/datasets/load.py:2549: in load_dataset
builder_instance.download_and_prepare(
.../site-packages/datasets/builder.py:1005: in download_and_prepare
self._download_and_prepare(
.../site-packages/datasets/builder.py:1078: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
.../site-packages/datasets/packaged_modules/csv/csv.py:147: in _split_generators
data_files = dl_manager.download_and_extract(self.config.data_files)
.../site-packages/datasets/download/download_manager.py:562: in download_and_extract
return self.extract(self.download(url_or_urls))
.../site-packages/datasets/download/download_manager.py:426: in download
downloaded_path_or_paths = map_nested(
.../site-packages/datasets/utils/py_utils.py:466: in map_nested
mapped = [
.../site-packages/datasets/utils/py_utils.py:467: in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
.../site-packages/datasets/utils/py_utils.py:387: in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
.../site-packages/datasets/utils/py_utils.py:387: in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
.../site-packages/datasets/utils/py_utils.py:370: in _single_map_nested
return function(data_struct)
.../site-packages/datasets/download/download_manager.py:451: in _download
out = cached_path(url_or_filename, download_config=download_config)
.../site-packages/datasets/utils/file_utils.py:188: in cached_path
output_path = get_from_cache(
...1/site-packages/datasets/utils/file_utils.py:511: in get_from_cache
response = fsspec_head(url, storage_options=storage_options)
.../site-packages/datasets/utils/file_utils.py:316: in fsspec_head
fs, _, paths = fsspec.get_fs_token_paths(url, storage_options=storage_options)
.../site-packages/fsspec/core.py:622: in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
.../site-packages/fsspec/registry.py:290: in filesystem
return cls(**storage_options)
.../site-packages/fsspec/spec.py:79: in __call__
obj = super().__call__(*args, **kwargs)
.../site-packages/s3fs/core.py:187: in __init__
self.s3 = self.connect()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <s3fs.core.S3FileSystem object at 0x1500a1310>, refresh = True
def connect(self, refresh=True):
"""
Establish S3 connection object.
Parameters
----------
refresh : bool
Whether to create new session/client, even if a previous one with
the same parameters already exists. If False (default), an
existing one will be used if possible
"""
if refresh is False:
# back compat: we store whole FS instance now
return self.s3
anon, key, secret, kwargs, ckwargs, token, ssl = (
self.anon, self.key, self.secret, self.kwargs,
self.client_kwargs, self.token, self.use_ssl)
if not self.passed_in_session:
> self.session = botocore.session.Session(**self.kwargs)
E TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
### Steps to reproduce the bug
1. Assuming a valid CSV file located at `s3://bucket/data.csv`
2. Run the below code:
```
from datasets import load_dataset

storage_options = {
"key": "...",
"secret": "...",
"client_kwargs": {
"endpoint_url": "...",
}
}
load_dataset("csv", data_files="s3://bucket/data.csv", storage_options=storage_options)
```
Encountered in version `2.16.1` but also reproduced in `2.16.0` and `2.15.0`.
Note: I encountered this in a unit test using a `moto` mock for S3; however, since the error occurs before the session is instantiated, the mock should not be the issue.
### Expected behavior
No exception is raised, the boto3 session is created successfully, and the CSV file is downloaded successfully and returned as a dataset.
===
After some research I found that `DownloadConfig` has a `__post_init__` method that always forces this value to be set in its `storage_options`, even though, in the case of an S3 location, the storage options get passed on to the S3 session, which does not expect this parameter. I assume this parameter is needed when reading from the Hugging Face Hub and should not be set in this context.
Unfortunately there is nothing the user can do to work around it. Even if you manually do something like:
```
from datasets import DownloadConfig, load_dataset

download_config = DownloadConfig()
del download_config.storage_options["hf"]
load_dataset("csv", data_files="s3://bucket/data.csv", download_config=download_config)
```
the library will still reinsert this parameter when `download_config = self.download_config.copy()` in line 418 of `download_manager.py` (`DownloadManager.download`).
Therefore `load_dataset` currently cannot be used to read a dataset in CSV format from an S3 location.
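As a stop-gap on the user side (a sketch, not a fix), reading the CSV through pandas/fsspec and converting avoids the `DownloadConfig` code path entirely; `storage_options` here is the same dict as in the reproduction above:
```python
import pandas as pd
from datasets import Dataset

df = pd.read_csv("s3://bucket/data.csv", storage_options=storage_options)  # plain s3fs options, no injected "hf" key
dataset = Dataset.from_pandas(df)
```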
### Environment info
- `datasets` version: 2.16.1
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
| {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6598/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6597/comments | https://api.github.com/repos/huggingface/datasets/issues/6597/events | https://github.com/huggingface/datasets/issues/6597 | 2,083,708,521 | I_kwDODunzps58Mt5p | 6,597 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"",
""
] | "2024-01-16T11:27:07Z" | "2024-02-05T12:29:37Z" | "2024-02-05T12:29:37Z" | MEMBER | null | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_description="Convert dataset to Parquet.",
create_pr=True,
token=token,
)
```
creates the additional dataset `albertvillanova/caner`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6597/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6596/comments | https://api.github.com/repos/huggingface/datasets/issues/6596/events | https://github.com/huggingface/datasets/pull/6596 | 2,083,108,156 | PR_kwDODunzps5kJceH | 6,596 | Drop redundant None guard. | {
"avatar_url": "https://avatars.githubusercontent.com/u/5203025?v=4",
"events_url": "https://api.github.com/users/xkszltl/events{/privacy}",
"followers_url": "https://api.github.com/users/xkszltl/followers",
"following_url": "https://api.github.com/users/xkszltl/following{/other_user}",
"gists_url": "https://api.github.com/users/xkszltl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xkszltl",
"id": 5203025,
"login": "xkszltl",
"node_id": "MDQ6VXNlcjUyMDMwMjU=",
"organizations_url": "https://api.github.com/users/xkszltl/orgs",
"received_events_url": "https://api.github.com/users/xkszltl/received_events",
"repos_url": "https://api.github.com/users/xkszltl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xkszltl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xkszltl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xkszltl"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-16T06:31:54Z" | "2024-01-16T17:16:16Z" | "2024-01-16T17:05:52Z" | CONTRIBUTOR | null | `xxx if xxx is not None else None` is no-op. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6596/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6596/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6596.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6596",
"merged_at": "2024-01-16T17:05:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6596.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6596"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6595/comments | https://api.github.com/repos/huggingface/datasets/issues/6595/events | https://github.com/huggingface/datasets/issues/6595 | 2,082,896,148 | I_kwDODunzps58JnkU | 6,595 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-16T02:03:09Z" | "2024-01-27T18:26:33Z" | "2024-01-26T02:28:32Z" | NONE | null | ### Describe the bug
I'm aware of issue #5695.
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So I:
1. Map dataset
2. Save to disk
3. Try to upload:
```
import datasets
from datasets import load_from_disk
dataset = load_from_disk("ds")
datasets.config.DEFAULT_MAX_BATCH_SIZE = 1
dataset.push_to_hub("kopyl/ds", private=True, max_shard_size="500MB")
```
And I get this error:
`pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat`
Full traceback:
```
>>> dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, max_shard_size="500MB")
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1451/1451 [00:00<00:00, 6827.40 examples/s]
Uploading the dataset shards: 0%| | 0/2099 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py", line 1705, in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 5208, in _push_parquet_shards_to_hub
shard.to_parquet(buffer)
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 4931, in to_parquet
return ParquetDatasetWriter(self, path_or_buf, batch_size=batch_size, **parquet_writer_kwargs).write()
File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 129, in write
written = self._write(file_obj=self.path_or_buf, batch_size=batch_size, **self.parquet_writer_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 141, in _write
writer = pq.ParquetWriter(file_obj, schema=schema, **parquet_writer_kwargs)
File "/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py", line 1016, in __init__
self.writer = _parquet.ParquetWriter(
File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat
```
Smaller datasets saved and pushed the same way work fine; big ones do not.
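The `halffloat` in the error points at a float16 column; one possible workaround (sketch only; nested/sequence features would need their inner feature cast instead) is casting those columns to float32 before pushing:
```python
from datasets import Value, load_from_disk

dataset = load_from_disk("ds")
for name, feature in dataset.features.items():
    if getattr(feature, "dtype", None) == "float16":           # only catches flat float16 columns
        dataset = dataset.cast_column(name, Value("float32"))
dataset.push_to_hub("kopyl/ds", private=True, max_shard_size="500MB")
```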
I'm currently trying to upload the dataset like this:
`HfApi().upload_folder...`
But I'm not sure that `load_dataset` would work well in this case.
Setting `num_shards` does not help either:
```
dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, num_shards={'train': 500})
```
I tried 3000, 500, 478 and 100.
Also do you know if it's possible to push a dataset with multiple processes? It would take an eternity pushing 1TB...
### Steps to reproduce the bug
Described above
### Expected behavior
Should be able to upload...
### Environment info
Total dataset size: 978G
Amount of `.arrow` files: 2101
Each `.arrow` file size: 477M (I know 477 MB * 2101 does not equal 978G; I only checked the size of a couple of `.arrow` files, so some might have a different size)
Some files:
- "ds/train/state.json": https://pastebin.com/tJ3ZLGAg
- "ds/train/dataset_info.json": https://pastebin.com/JdXMQ5ih | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6595/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6594/comments | https://api.github.com/repos/huggingface/datasets/issues/6594/events | https://github.com/huggingface/datasets/issues/6594 | 2,082,748,275 | I_kwDODunzps58JDdz | 6,594 | IterableDataset sharding logic needs improvement | {
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwightman",
"id": 5702664,
"login": "rwightman",
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"repos_url": "https://api.github.com/users/rwightman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwightman"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-15T22:22:36Z" | "2024-01-15T22:25:10Z" | null | NONE | null | ### Describe the bug
The sharding of IterableDatasets across distributed processes and dataloader worker processes appears problematic, with significant performance traps and inconsistencies between distributed train processes and worker processes.
Splitting across num_workers (per train process loader processes) and world_size (distributed training processes) appears inconsistent.
* worker split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1266-L1283
* distributed split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1335-L1356
In the case of the distributed split, there is a modulus check that flips between two very different behaviours; why is this different from splitting across the data loader workers? For IterableDatasets the DataLoader's worker processes are independent, so whether it's workers within one train process or across a distributed world, the shards should be distributed the same way: across `world_size * num_workers` independent workers in either case...
Further, the fallback case when the `n_shards % world_size == 0` check fails is a rather extreme change. I argue it is not desirable to do that implicitly; it should be an explicit option for specific scenarios (i.e. reliable validation). A train scenario would likely be much better handled with improved wrapping / stopping behaviour, to e.g. also fix #6437. Changing from stepping over shards to stepping over samples means that every single process reads ALL of the shards. This was never an intended default for sharded training: shards gain their performance advantage in large-scale distributed training by explicitly avoiding the need for every process to overlap in the data it reads; by default, only the data allocated to each process via its assigned shards should be read in each pass of the dataset.
Using a large-scale CLIP example: some of the larger datasets have 10-20k shards across 100+ TB of data. Training with 1000 GPUs, we switch from reading ~100 terabytes per epoch to ~100 petabytes if, say, we go from 20k % 1000 dividing evenly to 20k % 992 not dividing evenly after dropping one GPU node.
The 'step over samples' case might be worth the overhead in specific validation scenarios where guarantees of seeing each sample at least/at most once are more important, and which do not make up a significant portion of train time or are done at smaller world sizes outside of training.
### Steps to reproduce the bug
N/A
### Expected behavior
We have an iterable dataset with N shards to split across workers (a rough sketch of this assignment follows the list):
* shuffle shards (same seed across all train processes)
* step shard iterator across distributed processes
* step shard iterator across dataloader worker processes
* shuffle samples in every worker via a shuffle buffer (different seed in each worker, but ideally controllable, based on base seed + worker id + epoch)
* end up with a (possibly uneven) number of shards per worker, but each shard only ever accessed by 1 worker per pass (epoch)
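A rough sketch (illustrative only, not the current `datasets` logic) of assigning whole shards to `world_size * num_workers` independent readers:
```python
import random

def shards_for_reader(num_shards, rank, world_size, worker_id, num_workers, seed):
    shard_ids = list(range(num_shards))
    random.Random(seed).shuffle(shard_ids)               # same seed -> identical order in every process
    reader = rank * num_workers + worker_id              # global reader index
    return shard_ids[reader::world_size * num_workers]   # each shard read by exactly one reader per epoch

# e.g. 20_000 shards, 1000 GPUs, 8 dataloader workers per GPU
print(len(shards_for_reader(20_000, rank=0, world_size=1000, worker_id=0, num_workers=8, seed=0)))
```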
### Environment info
N/A | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6594/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6592 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6592/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6592/comments | https://api.github.com/repos/huggingface/datasets/issues/6592/events | https://github.com/huggingface/datasets/issues/6592 | 2,082,410,257 | I_kwDODunzps58Hw8R | 6,592 | Logs are delayed when doing .map when `docker logs` | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-15T17:05:21Z" | "2024-02-12T17:35:21Z" | "2024-02-12T17:35:21Z" | NONE | null | ### Describe the bug
When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed.
It only updates every few percent.
When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real time, not every couple of hours, to make sure nothing is frozen or broken.
### Steps to reproduce the bug
1. Run any huge dataset processing as a Docker image
2. Run `docker logs image_name` to follow it
### Expected behavior
...
### Environment info
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6592/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6592/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6591/comments | https://api.github.com/repos/huggingface/datasets/issues/6591/events | https://github.com/huggingface/datasets/issues/6591 | 2,082,378,957 | I_kwDODunzps58HpTN | 6,591 | The datasets models housed in Dropbox can't support a lot of users downloading them | {
"avatar_url": "https://avatars.githubusercontent.com/u/4933774?v=4",
"events_url": "https://api.github.com/users/RDaneelOlivav/events{/privacy}",
"followers_url": "https://api.github.com/users/RDaneelOlivav/followers",
"following_url": "https://api.github.com/users/RDaneelOlivav/following{/other_user}",
"gists_url": "https://api.github.com/users/RDaneelOlivav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RDaneelOlivav",
"id": 4933774,
"login": "RDaneelOlivav",
"node_id": "MDQ6VXNlcjQ5MzM3NzQ=",
"organizations_url": "https://api.github.com/users/RDaneelOlivav/orgs",
"received_events_url": "https://api.github.com/users/RDaneelOlivav/received_events",
"repos_url": "https://api.github.com/users/RDaneelOlivav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RDaneelOlivav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RDaneelOlivav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RDaneelOlivav"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-15T16:43:38Z" | "2024-01-22T23:18:09Z" | "2024-01-22T23:18:09Z" | NONE | null | ### Describe the bug
I'm using the dataset like this:
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
And it seems that sometimes, presumably when a lot of users are accessing the same resources, the Dropbox host fails:
`raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://www.dropbox.com/s/e2us0hcs3ilr20e/MInDS-14.zip?dl=1 (error 429)`
My question is whether we can somehow host these files elsewhere, whether the limit on simultaneous users accessing those resources can be raised, or whether there is any other solution.
Also, has anyone had this issue before?
Thanks
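In the meantime, the only client-side mitigation I can think of (just a sketch) is retrying with backoff around the 429 responses:
```python
import time
from datasets import load_dataset

for attempt in range(5):
    try:
        dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
        break
    except ConnectionError:
        time.sleep(2 ** attempt)   # crude exponential backoff; it does not lift the rate limit itself
```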
### Steps to reproduce the bug
1: Create a python script like so:
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
2: Have a certain number of users execute this at the same time
### Expected behavior
I would expect that this shouldn't happen unless there is a huge number of users, which is not the case here.
### Environment info
This was done in an Ubuntu 22 environment. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6591/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6590/comments | https://api.github.com/repos/huggingface/datasets/issues/6590/events | https://github.com/huggingface/datasets/issues/6590 | 2,082,000,084 | I_kwDODunzps58GMzU | 6,590 | Feature request: Multi-GPU dataset mapping for SDXL training | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"",
""
] | "2024-01-15T13:06:06Z" | "2024-01-15T13:07:07Z" | null | NONE | null | ### Feature request
We need to speed up SDXL dataset pre-processing. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :)
### Motivation
Pre-computing 3 million images takes around 2 days.
It would be nice to be able to do multi-GPU (or even better, multi-GPU + multi-node) VAE and embedding precompute...
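For reference, a rough sketch of the pattern I have in mind, built on `map(..., with_rank=True)`; the `encode` function here is a toy stand-in for the real VAE/text-encoder step, and spawning (rather than forking) the worker processes is likely needed when CUDA is actually involved:
```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"pixel_values": [[0.5] * 16] * 64})

def encode(batch, rank):
    # assumption: one model replica per worker process, placed on the GPU matching its rank
    device = f"cuda:{rank % torch.cuda.device_count()}" if torch.cuda.is_available() else "cpu"
    x = torch.tensor(batch["pixel_values"], device=device)
    batch["latents"] = (x * 0.18215).cpu().tolist()   # stand-in for the real VAE encode
    return batch

if __name__ == "__main__":
    ds = ds.map(encode, batched=True, with_rank=True, num_proc=2)
```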
### Your contribution
I'm not sure I can wrap my head around the multi-GPU mapping...
Plus it's too expensive for me to take 2x A100 and spend a day just figuring out the stuff since I don't have a job right now. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6590/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6589/comments | https://api.github.com/repos/huggingface/datasets/issues/6589/events | https://github.com/huggingface/datasets/issues/6589 | 2,081,358,619 | I_kwDODunzps58DwMb | 6,589 | After `2.16.0` version, there are `PermissionError` when users use shared cache_dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/106717516?v=4",
"events_url": "https://api.github.com/users/minhopark-neubla/events{/privacy}",
"followers_url": "https://api.github.com/users/minhopark-neubla/followers",
"following_url": "https://api.github.com/users/minhopark-neubla/following{/other_user}",
"gists_url": "https://api.github.com/users/minhopark-neubla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/minhopark-neubla",
"id": 106717516,
"login": "minhopark-neubla",
"node_id": "U_kgDOBlxhTA",
"organizations_url": "https://api.github.com/users/minhopark-neubla/orgs",
"received_events_url": "https://api.github.com/users/minhopark-neubla/received_events",
"repos_url": "https://api.github.com/users/minhopark-neubla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/minhopark-neubla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minhopark-neubla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/minhopark-neubla"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-15T06:46:27Z" | "2024-02-02T07:55:38Z" | "2024-01-30T15:28:38Z" | NONE | null | ### Describe the bug
- We use a shared `cache_dir` via `HF_HOME="{shared_directory}"`
- After datasets version 2.16.0, datasets uses the `filelock` package for file locking (#6445)
- But the `filelock` package creates `.lock` files with `644` permissions
- The dataset is then not available to other users, except the user who created the lock file via `load_dataset`.
### Steps to reproduce the bug
1. `pip install datasets==2.16.0`
2. `export HF_HOME="{shared_directory}"`
3. download dataset with `load_dataset`
4. logout and login another user
5. `pip install datasets==2.16.0`
6. `export HF_HOME="{shared_directory}"`
7. download dataset with `load_dataset`
8. `PermissionError` occurs
### Expected behavior
- Users can share `cache_dir` using environment variable `HF_HOME`
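As a possible stop-gap until this is fixed (sketch only; whether this is acceptable depends on the ACL setup), the permissions of already-created lock files could be relaxed so other group members can acquire them:
```python
import os

hf_home = os.environ["HF_HOME"]
for root, _, files in os.walk(hf_home):
    for name in files:
        if name.endswith(".lock"):
            os.chmod(os.path.join(root, name), 0o666)  # make the lock acquirable by group/other users
```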
### Environment info
- python == 3.9.10
- datasets == 2.16.0
- ubuntu 22.04
- shared_directory has ACL
![image (1)](https://github.com/huggingface/datasets/assets/106717516/5ca759db-ad0c-4883-9a97-9c8fccd00d8a)
- users are same group (developers)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6589/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6588/comments | https://api.github.com/repos/huggingface/datasets/issues/6588/events | https://github.com/huggingface/datasets/issues/6588 | 2,081,284,253 | I_kwDODunzps58DeCd | 6,588 | fix os.listdir return name is empty string | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-15T05:34:36Z" | "2024-01-24T10:08:29Z" | "2024-01-24T10:08:29Z" | CONTRIBUTOR | null | ### Describe the bug
`xlistdir` returns an empty string as the entry name.
(`xlistdir` is the overloaded, streaming version of `os.listdir`.)
### Steps to reproduce the bug
```python
from datasets import DownloadConfig
from datasets.download.streaming_download_manager import StreamingDownloadManager, xjoin, xlistdir

# `options` holds the fsspec/lakefs storage options (credentials not shown here)
config = DownloadConfig(storage_options=options)
manager = StreamingDownloadManager("ILSVRC2012", download_config=config)
input_path = "lakefs://datalab/main/imagenet/ILSVRC2012.zip"
download_files = manager.download_and_extract(input_path)
current_dir = xjoin(download_files, "ILSVRC2012/Images/ILSVRC2012_img_train")
folder_list = xlistdir(current_dir)  # directory entries come back as "" instead of their names
```
Inside the `xlistdir` function, the file-info `obj["name"]` ends with "/",
so the last path component is empty and "" is returned for that entry.
### Expected behavior
Obj ["name"] ends with "/"
return folder name
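A minimal sketch of the expected behavior (illustrative; the actual fix belongs inside `xlistdir`):
```python
import posixpath

def entry_name(obj_name: str) -> str:
    # strip the trailing "/" so directory entries keep their basename instead of ""
    return posixpath.basename(obj_name.rstrip("/"))

assert entry_name("ILSVRC2012/Images/ILSVRC2012_img_train/") == "ILSVRC2012_img_train"
```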
### Environment info
no | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6588/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6587/comments | https://api.github.com/repos/huggingface/datasets/issues/6587/events | https://github.com/huggingface/datasets/pull/6587 | 2,080,348,016 | PR_kwDODunzps5kAT_5 | 6,587 | Allow concatenation of datasets with mixed structs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-13T15:33:20Z" | "2024-02-15T15:20:06Z" | "2024-02-08T14:38:32Z" | CONTRIBUTOR | null | Fixes #6466
The idea is to do a recursive check for structs. PyArrow handles it well enough.
For a demo you can do:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': 'abc@aol.com'}]})
print(concatenate_datasets([ds, ds2]).features)
print(concatenate_datasets([ds, ds2]).to_dict())
```
Now both the features and the rows are fixed.
I note that Sequence suffers from the same problem, so I can fix that in a future PR once this one is merged. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6587/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6587",
"merged_at": "2024-02-08T14:38:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6587"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6586/comments | https://api.github.com/repos/huggingface/datasets/issues/6586/events | https://github.com/huggingface/datasets/pull/6586 | 2,079,192,651 | PR_kwDODunzps5j8aJn | 6,586 | keep more info in DatasetInfo.from_merge #6585 | {
"avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4",
"events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}",
"followers_url": "https://api.github.com/users/JochenSiegWork/followers",
"following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}",
"gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JochenSiegWork",
"id": 135010976,
"login": "JochenSiegWork",
"node_id": "U_kgDOCAwaoA",
"organizations_url": "https://api.github.com/users/JochenSiegWork/orgs",
"received_events_url": "https://api.github.com/users/JochenSiegWork/received_events",
"repos_url": "https://api.github.com/users/JochenSiegWork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JochenSiegWork"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-12T16:08:16Z" | "2024-01-26T15:59:35Z" | "2024-01-26T15:53:28Z" | CONTRIBUTOR | null | * try not to merge DatasetInfos if they're equal
* fixes losing DatasetInfo during parallel Dataset.map | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6586/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6586.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6586",
"merged_at": "2024-01-26T15:53:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6586.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6586"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6585/comments | https://api.github.com/repos/huggingface/datasets/issues/6585/events | https://github.com/huggingface/datasets/issues/6585 | 2,078,874,005 | I_kwDODunzps576RmV | 6,585 | losing DatasetInfo in Dataset.map when num_proc > 1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4",
"events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}",
"followers_url": "https://api.github.com/users/JochenSiegWork/followers",
"following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}",
"gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JochenSiegWork",
"id": 135010976,
"login": "JochenSiegWork",
"node_id": "U_kgDOCAwaoA",
"organizations_url": "https://api.github.com/users/JochenSiegWork/orgs",
"received_events_url": "https://api.github.com/users/JochenSiegWork/received_events",
"repos_url": "https://api.github.com/users/JochenSiegWork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JochenSiegWork"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4",
"events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}",
"followers_url": "https://api.github.com/users/JochenSiegWork/followers",
"following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}",
"gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JochenSiegWork",
"id": 135010976,
"login": "JochenSiegWork",
"node_id": "U_kgDOCAwaoA",
"organizations_url": "https://api.github.com/users/JochenSiegWork/orgs",
"received_events_url": "https://api.github.com/users/JochenSiegWork/received_events",
"repos_url": "https://api.github.com/users/JochenSiegWork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JochenSiegWork"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/135010976?v=4",
"events_url": "https://api.github.com/users/JochenSiegWork/events{/privacy}",
"followers_url": "https://api.github.com/users/JochenSiegWork/followers",
"following_url": "https://api.github.com/users/JochenSiegWork/following{/other_user}",
"gists_url": "https://api.github.com/users/JochenSiegWork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JochenSiegWork",
"id": 135010976,
"login": "JochenSiegWork",
"node_id": "U_kgDOCAwaoA",
"organizations_url": "https://api.github.com/users/JochenSiegWork/orgs",
"received_events_url": "https://api.github.com/users/JochenSiegWork/received_events",
"repos_url": "https://api.github.com/users/JochenSiegWork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JochenSiegWork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JochenSiegWork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JochenSiegWork"
}
] | null | [
"",
""
] | "2024-01-12T13:39:19Z" | "2024-01-12T14:08:24Z" | null | CONTRIBUTOR | null | ### Describe the bug
Hello and thanks for developing this package!
When I process a Dataset with the map function using multiple processors, some attributes that were set on the DatasetInfo get lost and are None in the resulting Dataset.
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetInfo
def run_map(num_proc):
    dataset = Dataset.from_dict(
        {"col1": [0, 1], "col2": [3, 4]},
        info=DatasetInfo(
            dataset_name="my_dataset",
        ),
    )
    ds = dataset.map(lambda x: x, num_proc=num_proc)
    print(ds.info.dataset_name)
run_map(1)
run_map(2)
```
This puts out:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
None
```
### Expected behavior
I expect the DatasetInfo to be kept as it was and there should be no difference in the output of running map with num_proc=1 and num_proc=2.
Expected output:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
my_dataset
```
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.18
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6585/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6584/comments | https://api.github.com/repos/huggingface/datasets/issues/6584/events | https://github.com/huggingface/datasets/issues/6584 | 2,078,454,878 | I_kwDODunzps574rRe | 6,584 | np.fromfile not supported | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-12T09:46:17Z" | "2024-01-15T05:20:50Z" | null | CONTRIBUTOR | null | How to do np.fromfile to use it like np.load
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
    import numpy as np

    if hasattr(filepath_or_buffer, "read"):
        return np.fromfile(filepath_or_buffer, *args, **kwargs)
    else:
        filepath_or_buffer = str(filepath_or_buffer)
        return np.fromfile(xopen(filepath_or_buffer, "rb", download_config=download_config).read(), *args, **kwargs)
```
This does not work.
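A minimal sketch of one possible workaround (not part of the original report): since `np.fromfile` expects a path or a real file on disk, read the remote file through `xopen` and parse the downloaded bytes with `np.frombuffer`, which accepts an in-memory buffer:

```python
import numpy as np
# `xopen` as in the snippet above (from datasets.download.streaming_download_manager)

def xnumpy_frombuffer(filepath_or_buffer, *args, download_config=None, **kwargs):
    # download (or read) the whole file, then parse the raw bytes in memory
    with xopen(str(filepath_or_buffer), "rb", download_config=download_config) as f:
        return np.frombuffer(f.read(), *args, **kwargs)
```

Note that `np.frombuffer` only covers the binary case (it has no `sep` text parsing), so this is an approximation rather than a drop-in replacement.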
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6584/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6583/comments | https://api.github.com/repos/huggingface/datasets/issues/6583/events | https://github.com/huggingface/datasets/pull/6583 | 2,077,049,491 | PR_kwDODunzps5j1DzY | 6,583 | remove eli5 test | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-11T16:05:20Z" | "2024-01-11T16:15:34Z" | "2024-01-11T16:09:24Z" | MEMBER | null | since the dataset is defunct | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6583/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6583/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6583.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6583",
"merged_at": "2024-01-11T16:09:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6583.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6583"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6582/comments | https://api.github.com/repos/huggingface/datasets/issues/6582/events | https://github.com/huggingface/datasets/pull/6582 | 2,076,072,101 | PR_kwDODunzps5jxpry | 6,582 | Fix for Incorrect ex_iterable used with multi num_worker | {
"avatar_url": "https://avatars.githubusercontent.com/u/136600500?v=4",
"events_url": "https://api.github.com/users/kq-chen/events{/privacy}",
"followers_url": "https://api.github.com/users/kq-chen/followers",
"following_url": "https://api.github.com/users/kq-chen/following{/other_user}",
"gists_url": "https://api.github.com/users/kq-chen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kq-chen",
"id": 136600500,
"login": "kq-chen",
"node_id": "U_kgDOCCRbtA",
"organizations_url": "https://api.github.com/users/kq-chen/orgs",
"received_events_url": "https://api.github.com/users/kq-chen/received_events",
"repos_url": "https://api.github.com/users/kq-chen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kq-chen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kq-chen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kq-chen"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-11T08:49:43Z" | "2024-03-01T19:09:14Z" | "2024-03-01T19:02:33Z" | CONTRIBUTOR | null | Corrects an issue where `self._ex_iterable` was erroneously used instead of `ex_iterable`, when both Distributed Data Parallel (DDP) and multi num_worker are used concurrently. This improper usage led to the generation of incorrect `shards_indices`, subsequently causing issues with the control flow responsible for worker creation. The fix ensures the appropriate iterable is used, thus providing a more accurate determination of whether a new worker should be instantiated or not. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6582/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6582/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6582.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6582",
"merged_at": "2024-03-01T19:02:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6582.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6582"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6581/comments | https://api.github.com/repos/huggingface/datasets/issues/6581/events | https://github.com/huggingface/datasets/pull/6581 | 2,075,919,265 | PR_kwDODunzps5jxIbt | 6,581 | fix os.listdir return name is empty string | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/d710055071",
"id": 12895488,
"login": "d710055071",
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"repos_url": "https://api.github.com/users/d710055071/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"type": "User",
"url": "https://api.github.com/users/d710055071"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-11T07:10:55Z" | "2024-01-24T10:14:43Z" | "2024-01-24T10:08:28Z" | CONTRIBUTOR | null | fix #6588
xlistdir returns empty strings instead of the entry names.
for example:
```python
from datasets import DownloadConfig
from datasets.download.streaming_download_manager import StreamingDownloadManager, xjoin, xlistdir

config = DownloadConfig(storage_options=options)
manager = StreamingDownloadManager("ILSVRC2012", download_config=config)
input_path = "lakefs://datalab/main/imagenet/ILSVRC2012.zip"
download_files = manager.download_and_extract(input_path)
current_dir = xjoin(download_files, "ILSVRC2012/Images/ILSVRC2012_img_train")
folder_list = xlistdir(current_dir)
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6581/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6581/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6581.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6581",
"merged_at": "2024-01-24T10:08:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6581.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6581"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6580/comments | https://api.github.com/repos/huggingface/datasets/issues/6580/events | https://github.com/huggingface/datasets/issues/6580 | 2,075,645,042 | I_kwDODunzps57t9Ry | 6,580 | dataset cache only stores one config of the dataset in parquet dir, and uses that for all other configs resulting in showing same data in all configs. | {
"avatar_url": "https://avatars.githubusercontent.com/u/78641018?v=4",
"events_url": "https://api.github.com/users/kartikgupta321/events{/privacy}",
"followers_url": "https://api.github.com/users/kartikgupta321/followers",
"following_url": "https://api.github.com/users/kartikgupta321/following{/other_user}",
"gists_url": "https://api.github.com/users/kartikgupta321/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kartikgupta321",
"id": 78641018,
"login": "kartikgupta321",
"node_id": "MDQ6VXNlcjc4NjQxMDE4",
"organizations_url": "https://api.github.com/users/kartikgupta321/orgs",
"received_events_url": "https://api.github.com/users/kartikgupta321/received_events",
"repos_url": "https://api.github.com/users/kartikgupta321/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kartikgupta321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kartikgupta321/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kartikgupta321"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-11T03:14:18Z" | "2024-01-20T12:46:16Z" | "2024-01-20T12:46:16Z" | NONE | null | ### Describe the bug
ds = load_dataset("ai2_arc", "ARC-Easy"), i have tried to force redownload, delete cache and changing the cache dir.
### Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = []
dataset_name = "ai2_arc"
possible_configs = [
    'ARC-Challenge',
    'ARC-Easy'
]
for config in possible_configs:
    dataset_slice = load_dataset(dataset_name, config, ignore_verifications=True, cache_dir='ai2_arc_files')
    dataset.append(dataset_slice)
```
### Expected behavior
All configs should get saved in the cache under their respective names.
### Environment info
ai2_arc | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6580/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6579/comments | https://api.github.com/repos/huggingface/datasets/issues/6579/events | https://github.com/huggingface/datasets/issues/6579 | 2,075,407,473 | I_kwDODunzps57tDRx | 6,579 | Unable to load `eli5` dataset with streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/89672451?v=4",
"events_url": "https://api.github.com/users/haok1402/events{/privacy}",
"followers_url": "https://api.github.com/users/haok1402/followers",
"following_url": "https://api.github.com/users/haok1402/following{/other_user}",
"gists_url": "https://api.github.com/users/haok1402/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/haok1402",
"id": 89672451,
"login": "haok1402",
"node_id": "MDQ6VXNlcjg5NjcyNDUx",
"organizations_url": "https://api.github.com/users/haok1402/orgs",
"received_events_url": "https://api.github.com/users/haok1402/received_events",
"repos_url": "https://api.github.com/users/haok1402/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/haok1402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haok1402/subscriptions",
"type": "User",
"url": "https://api.github.com/users/haok1402"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-10T23:44:20Z" | "2024-01-11T09:19:18Z" | "2024-01-11T09:19:17Z" | NONE | null | ### Describe the bug
Unable to load `eli5` dataset with streaming.
### Steps to reproduce the bug
This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions
```
from datasets import load_dataset
load_dataset("eli5", streaming=True)
```
This works correctly.
```
from datasets import load_dataset
load_dataset("eli5")
```
### Expected behavior
- Loading the `eli5` dataset should not raise an error in streaming mode.
- Or, at the very least, show a warning that streaming mode is not supported with the `eli5` dataset.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6579/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6578 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6578/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6578/comments | https://api.github.com/repos/huggingface/datasets/issues/6578/events | https://github.com/huggingface/datasets/pull/6578 | 2,074,923,321 | PR_kwDODunzps5jtthB | 6,578 | Faster webdataset streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-10T18:18:09Z" | "2024-01-30T18:46:02Z" | "2024-01-30T18:39:51Z" | MEMBER | null | requests.get(..., streaming=True) is faster than using HTTP range requests when streaming large TAR files
it can be enabled using block_size=0 in fsspec
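For illustration only (not part of the PR): in fsspec, `block_size=0` makes the HTTP filesystem hand back a plain streaming file object instead of issuing range requests, e.g.:

```python
import fsspec

# placeholder URL; block_size=0 -> sequential streaming read, no HTTP range requests
with fsspec.open("https://example.com/shard-00000.tar", "rb", block_size=0) as f:
    chunk = f.read(1024 * 1024)
```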
cc @rwightman | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6578/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6578/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6578.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6578",
"merged_at": "2024-01-30T18:39:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6578.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6578"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6577/comments | https://api.github.com/repos/huggingface/datasets/issues/6577/events | https://github.com/huggingface/datasets/issues/6577 | 2,074,790,848 | I_kwDODunzps57qsvA | 6,577 | 502 Server Errors when streaming large dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi"
} | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | [] | null | [
"",
""
] | "2024-01-10T16:59:36Z" | "2024-02-12T11:46:03Z" | "2024-01-15T16:05:44Z" | CONTRIBUTOR | null | ### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hub (~3TB), I often encounter 502 Server Errors seemingly at random during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
```
This is despite the parquet file definitely existing on the Hub: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/blob/main/train/train-00228-of-07135.parquet
And having the correct commit id: [7d2acc5c59de848e456e951a76e805304d6fb350](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/commits/main/train)
I’m wondering whether this is coming from datasets? Or from the Hub side?
### Steps to reproduce the bug
Reproducer:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm
NUM_EPOCHS = 20
dataset = load_dataset("sanchit-gandhi/concatenated-train-set", "train", streaming=True)
dataset = dataset.with_format("torch")
dataloader = DataLoader(dataset["train"], batch_size=256, drop_last=True, pin_memory=True, num_workers=16)
for epoch in tqdm(range(NUM_EPOCHS), desc="Epoch", position=0):
for batch in tqdm(dataloader, desc="Batch", position=1):
continue
```
Running the above script tends to fail within about 2 hours with a traceback like the following:
<details>
<summary> Traceback: </summary>
```python
1029 for batch in train_loader:
1030 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
1031 data = self._next_data()
1032 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
1033 return self._process_data(data)
1034 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
1035 data.reraise()
1036 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 694, in reraise
1037 raise exception
1038 huggingface_hub.utils._errors.HfHubHTTPError: Caught HfHubHTTPError in DataLoader worker process 10.
1039 Original Traceback (most recent call last):
1040 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 286, in hf_raise_for_status
1041 response.raise_for_status()
1042 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
1043 raise HTTPError(http_error_msg, response=self)
1044 requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
1045 The above exception was the direct cause of the following exception:
1046 Traceback (most recent call last):
1047 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
1048 data = fetcher.fetch(index)
1049 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
1050 data.append(next(self.dataset_iter))
1051 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1363, in __iter__
1052 yield from self._iter_pytorch()
1053 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1298, in _iter_pytorch
1054 for key, example in ex_iterable:
1055 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 983, in __iter__
1056 for x in self.ex_iterable:
1057 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__
1058 yield from self._iter()
1059 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter
1060 for key, example in iterator:
1061 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__
1062 yield from self._iter()
1063 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter
1064 for key, example in iterator:
1065 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__
1066 yield from self._iter()
1067 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter
1068 for key, example in iterator:
1069 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__
1070 for key, example in self.ex_iterable:
1071 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__
1072 yield from self._iter()
1073 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter
1074 for key, example in iterator:
1075 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__
1076 for key, example in self.ex_iterable:
1077 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 282, in __iter__
1078 for key, pa_table in self.generate_tables_fn(**self.kwargs):
1079 File "/home/sanchitgandhi/datasets/src/datasets/packaged_modules/parquet/parquet.py", line 87, in _generate_tables
1080 for batch_idx, record_batch in enumerate(
1081 File "pyarrow/_parquet.pyx", line 1367, in iter_batches
1082 File "pyarrow/types.pxi", line 88, in pyarrow.lib._datatype_to_pep3118
1083 File "/home/sanchitgandhi/datasets/src/datasets/download/streaming_download_manager.py", line 341, in read_with_retries
1084 out = read(*args, **kwargs)
1085 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/spec.py", line 1856, in read
1086 out = self.cache._fetch(self.loc, self.loc + length)
1087 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/caching.py", line 189, in _fetch
1088 self.cache = self.fetcher(start, end) # new block replaces old
1089 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range
1090 hf_raise_for_status(r)
1091 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status
1092 raise HfHubHTTPError(str(e), response=response) from e
1093 huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
```
</details>
### Expected behavior
Should be able to stream the dataset without any 502 error.
### Environment info
- `datasets` version: 2.16.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- `huggingface_hub` version: 0.20.1
- PyArrow version: 14.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6577/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6577/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6576/comments | https://api.github.com/repos/huggingface/datasets/issues/6576/events | https://github.com/huggingface/datasets/issues/6576 | 2,073,710,124 | I_kwDODunzps57mk4s | 6,576 | document page 404 not found after redirection | {
"avatar_url": "https://avatars.githubusercontent.com/u/39179888?v=4",
"events_url": "https://api.github.com/users/annahung31/events{/privacy}",
"followers_url": "https://api.github.com/users/annahung31/followers",
"following_url": "https://api.github.com/users/annahung31/following{/other_user}",
"gists_url": "https://api.github.com/users/annahung31/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/annahung31",
"id": 39179888,
"login": "annahung31",
"node_id": "MDQ6VXNlcjM5MTc5ODg4",
"organizations_url": "https://api.github.com/users/annahung31/orgs",
"received_events_url": "https://api.github.com/users/annahung31/received_events",
"repos_url": "https://api.github.com/users/annahung31/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/annahung31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/annahung31/subscriptions",
"type": "User",
"url": "https://api.github.com/users/annahung31"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-10T06:48:14Z" | "2024-01-17T14:01:31Z" | "2024-01-17T14:01:31Z" | NONE | null | ### Describe the bug
The redirected page returns a 404 Not Found error.
### Steps to reproduce the bug
1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt
original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49
```
By default, 🤗 Datasets will decompress the files needed to load a dataset. If you want to preserve hard drive space, you can pass `DownloadConfig(delete_extracted=True)` to the `download_config` argument of `load_dataset()`. See the [documentation](https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig) for more details.
```
The documentation points to `https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig`
it shows `The documentation page PACKAGE_REFERENCE/BUILDER_CLASSES.HTML doesn’t exist in v2.16.1, but exists on the main version. Click [here](https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html) to redirect to the main version of the documentation.`
But the redirected website `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html` is 404 not found.
### Expected behavior
I guess the redirected website should be
`https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes` (without `.html`)
or `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes#datasets.DownloadConfig`.
### Environment info
Datasets main | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6576/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6576/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6575/comments | https://api.github.com/repos/huggingface/datasets/issues/6575/events | https://github.com/huggingface/datasets/pull/6575 | 2,072,617,406 | PR_kwDODunzps5jl1V6 | 6,575 | [IterableDataset] Fix `drop_last_batch`in map after shuffling or sharding | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-09T15:35:31Z" | "2024-01-11T16:16:54Z" | "2024-01-11T16:10:30Z" | MEMBER | null | It was not taken into account e.g. when passing to a DataLoader with num_workers>0
Fix https://github.com/huggingface/datasets/issues/6565 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6575/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6575.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6575",
"merged_at": "2024-01-11T16:10:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6575.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6575"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6574/comments | https://api.github.com/repos/huggingface/datasets/issues/6574/events | https://github.com/huggingface/datasets/pull/6574 | 2,072,579,549 | PR_kwDODunzps5jltBC | 6,574 | Fix tests based on datasets that used to have scripts | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-09T15:16:16Z" | "2024-01-09T16:11:33Z" | "2024-01-09T16:05:13Z" | MEMBER | null | ...now that `squad` and `paws` don't have a script anymore | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6574/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6574.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6574",
"merged_at": "2024-01-09T16:05:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6574.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6574"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6573/comments | https://api.github.com/repos/huggingface/datasets/issues/6573/events | https://github.com/huggingface/datasets/pull/6573 | 2,072,553,951 | PR_kwDODunzps5jlnaj | 6,573 | [WebDataset] Audio support and bug fixes | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-09T15:03:04Z" | "2024-01-11T16:17:28Z" | "2024-01-11T16:11:04Z" | MEMBER | null | - Add audio support
- Fix an issue where user-provided features with additional fields are not taken into account
Close https://github.com/huggingface/datasets/issues/6569 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6573/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6573/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6573.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6573",
"merged_at": "2024-01-11T16:11:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6573.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6573"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6572/comments | https://api.github.com/repos/huggingface/datasets/issues/6572/events | https://github.com/huggingface/datasets/pull/6572 | 2,072,384,281 | PR_kwDODunzps5jlCO5 | 6,572 | Adding option for multipart archive download | {
"avatar_url": "https://avatars.githubusercontent.com/u/66251151?v=4",
"events_url": "https://api.github.com/users/jpodivin/events{/privacy}",
"followers_url": "https://api.github.com/users/jpodivin/followers",
"following_url": "https://api.github.com/users/jpodivin/following{/other_user}",
"gists_url": "https://api.github.com/users/jpodivin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jpodivin",
"id": 66251151,
"login": "jpodivin",
"node_id": "MDQ6VXNlcjY2MjUxMTUx",
"organizations_url": "https://api.github.com/users/jpodivin/orgs",
"received_events_url": "https://api.github.com/users/jpodivin/received_events",
"repos_url": "https://api.github.com/users/jpodivin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jpodivin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpodivin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jpodivin"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-09T13:35:44Z" | "2024-02-25T08:13:01Z" | "2024-02-25T08:13:01Z" | NONE | null | Right now we can only download multiple separate archives or a single file archive, but not multipart archives, such as those produced by `tar --multi-volume`. This PR allows for downloading and extraction of archives split into multiple parts.
With the new `multi_part` field of the `DownloadConfig` set, the downloader will first retrieve all the files and attempt to concatenate them before starting extraction. This will obviously fail if files retrieved are actually multiple separate archives, so the option is set to `False` by default.
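A sketch of how the proposed option might be used (hypothetical, since `multi_part` only exists in this PR and the dataset name is a placeholder):

```python
from datasets import DownloadConfig, load_dataset

config = DownloadConfig(multi_part=True)  # proposed flag, False by default
ds = load_dataset("username/dataset-with-multipart-archive", download_config=config)
```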
Tests and docs incoming. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6572/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6572.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6572",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6572.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6572"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6571/comments | https://api.github.com/repos/huggingface/datasets/issues/6571/events | https://github.com/huggingface/datasets/issues/6571 | 2,072,111,000 | I_kwDODunzps57geeY | 6,571 | Make DatasetDict.column_names return a list instead of dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"",
""
] | "2024-01-09T10:45:17Z" | "2024-01-09T10:45:17Z" | null | MEMBER | null | Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values.
However, by construction, all splits have the same column names.
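For illustration (hypothetical dataset and column names):

```python
from datasets import load_dataset

ds = load_dataset("some/dataset")  # a DatasetDict with "train" and "test" splits
print(ds.column_names)
# current behavior:  {'train': ['text', 'label'], 'test': ['text', 'label']}
# proposed behavior: ['text', 'label']
```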
I think it makes more sense to return a single list with the column names, which is the same for all the split keys. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6571/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6571/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6570/comments | https://api.github.com/repos/huggingface/datasets/issues/6570/events | https://github.com/huggingface/datasets/issues/6570 | 2,071,805,265 | I_kwDODunzps57fT1R | 6,570 | No online docs for 2.16 release | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [
"",
""
] | "2024-01-09T07:43:30Z" | "2024-01-09T16:45:50Z" | "2024-01-09T16:45:50Z" | MEMBER | null | We do not have the online docs for the latest minor release 2.16 (2.16.0 nor 2.16.1).
In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index
![Screenshot from 2024-01-09 08-43-08](https://github.com/huggingface/datasets/assets/8515462/83613222-867f-41f4-8833-7a4a76582f44)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6570/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6569/comments | https://api.github.com/repos/huggingface/datasets/issues/6569/events | https://github.com/huggingface/datasets/issues/6569 | 2,070,251,122 | I_kwDODunzps57ZYZy | 6,569 | WebDataset ignores features defined in YAML or passed to load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"",
""
] | "2024-01-08T11:24:21Z" | "2024-01-11T16:11:06Z" | "2024-01-11T16:11:05Z" | MEMBER | null | we should not override if the features exist already
https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L85 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6569/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6568/comments | https://api.github.com/repos/huggingface/datasets/issues/6568/events | https://github.com/huggingface/datasets/issues/6568 | 2,069,922,151 | I_kwDODunzps57YIFn | 6,568 | keep_in_memory=True does not seem to work | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-08T08:03:58Z" | "2024-01-13T04:53:04Z" | null | NONE | null | UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :( | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6568/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6567/comments | https://api.github.com/repos/huggingface/datasets/issues/6567/events | https://github.com/huggingface/datasets/issues/6567 | 2,069,808,842 | I_kwDODunzps57XsbK | 6,567 | AttributeError: 'str' object has no attribute 'to' | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andysingal",
"id": 20493493,
"login": "andysingal",
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"repos_url": "https://api.github.com/users/andysingal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andysingal"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-08T06:40:21Z" | "2024-01-08T11:56:19Z" | "2024-01-08T10:03:17Z" | NONE | null | ### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
8 report_to="wandb")
9
---> 10 trainer = Trainer(
11 model=model,
12 args=training_args,
1 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _move_model_to_device(self, model, device)
688
689 def _move_model_to_device(self, model, device):
--> 690 model = model.to(device)
691 # Moving a model to an XLA device disconnects the tied weights, so we have to retie them.
692 if self.args.parallel_mode == ParallelMode.TPU and hasattr(model, "tie_weights"):
AttributeError: 'str' object has no attribute 'to'
```
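For context, this traceback usually means the checkpoint name (a plain `str`) was passed to `Trainer` instead of a loaded model. A hedged sketch of the usual pattern (hypothetical checkpoint name; adapt the Auto class to the task):

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# Trainer calls model.to(device), so it needs a model object,
# not the checkpoint name as a string.
model = AutoModelForCausalLM.from_pretrained("your-checkpoint-name")
training_args = TrainingArguments(output_dir="output")
trainer = Trainer(model=model, args=training_args)
```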
### Steps to reproduce the bug
Here is the notebook:
```
https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing
```
### Expected behavior
The training should run.
### Environment info
Colab Notebook , T4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6567/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6566/comments | https://api.github.com/repos/huggingface/datasets/issues/6566/events | https://github.com/huggingface/datasets/issues/6566 | 2,069,495,429 | I_kwDODunzps57Wf6F | 6,566 | I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/25008090?v=4",
"events_url": "https://api.github.com/users/HelloWorldBeginner/events{/privacy}",
"followers_url": "https://api.github.com/users/HelloWorldBeginner/followers",
"following_url": "https://api.github.com/users/HelloWorldBeginner/following{/other_user}",
"gists_url": "https://api.github.com/users/HelloWorldBeginner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HelloWorldBeginner",
"id": 25008090,
"login": "HelloWorldBeginner",
"node_id": "MDQ6VXNlcjI1MDA4MDkw",
"organizations_url": "https://api.github.com/users/HelloWorldBeginner/orgs",
"received_events_url": "https://api.github.com/users/HelloWorldBeginner/received_events",
"repos_url": "https://api.github.com/users/HelloWorldBeginner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HelloWorldBeginner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HelloWorldBeginner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HelloWorldBeginner"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-08T02:37:03Z" | "2024-01-20T00:29:18Z" | null | NONE | null | ### Describe the bug
```
Traceback (most recent call last):
File "train_controlnet_sdxl.py", line 1252, in <module>
main(args)
File "train_controlnet_sdxl.py", line 1013, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
writer.write_batch(batch)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 557, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 248, in pyarrow.lib.array
File "pyarrow/array.pxi", line 113, in pyarrow.lib._handle_arrow_array_protocol
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 191, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 447, in cast_to_python_objects
return _cast_to_python_objects(
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 324, in _cast_to_python_objects
for x in obj.detach().cpu().numpy()
TypeError: Got unsupported ScalarType BFloat16
```
### Steps to reproduce the bug
Here is my training script. I use the BF16 type and train my model with diffusers.
```
export MODEL_DIR="/home/mhh/sd_models/stable-diffusion-xl-base-1.0"
export OUTPUT_DIR="./control_net"
export VAE_NAME="/home/mhh/sd_models/sdxl-vae-fp16-fix"
accelerate launch train_controlnet_sdxl.py \
--pretrained_model_name_or_path=$MODEL_DIR \
--output_dir=$OUTPUT_DIR \
--pretrained_vae_model_name_or_path=$VAE_NAME \
--dataset_name=/home/mhh/sd_datasets/fusing/fill50k \
--mixed_precision="bf16" \
--resolution=1024 \
--learning_rate=1e-5 \
--max_train_steps=200 \
--validation_image "/home/mhh/sd_datasets/controlnet_image/conditioning_image_1.png" "/home/mhh/sd_datasets/controlnet_image/conditioning_image_2.png" \
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
--validation_steps=50 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--report_to="wandb" \
--seed=42 \
```
### Expected behavior
When I changed the data type to fp16, it worked.
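A hedged workaround sketch (assuming the mapped `compute_embeddings_fn` returns torch tensors): cast BF16 outputs to float32 before they reach the Arrow writer, since the cast to Python objects does not support BFloat16 scalars.

```python
import torch

def make_arrow_friendly(batch_outputs):
    # Convert BF16 tensors to float32 so `datasets` can serialize them;
    # everything else is passed through unchanged.
    return {
        key: value.float()
        if isinstance(value, torch.Tensor) and value.dtype == torch.bfloat16
        else value
        for key, value in batch_outputs.items()
    }
```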
### Environment info
datasets 2.16.1
numpy 1.24.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6566/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6565/comments | https://api.github.com/repos/huggingface/datasets/issues/6565/events | https://github.com/huggingface/datasets/issues/6565 | 2,068,939,670 | I_kwDODunzps57UYOW | 6,565 | `drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader | {
"avatar_url": "https://avatars.githubusercontent.com/u/12119806?v=4",
"events_url": "https://api.github.com/users/naba89/events{/privacy}",
"followers_url": "https://api.github.com/users/naba89/followers",
"following_url": "https://api.github.com/users/naba89/following{/other_user}",
"gists_url": "https://api.github.com/users/naba89/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/naba89",
"id": 12119806,
"login": "naba89",
"node_id": "MDQ6VXNlcjEyMTE5ODA2",
"organizations_url": "https://api.github.com/users/naba89/orgs",
"received_events_url": "https://api.github.com/users/naba89/received_events",
"repos_url": "https://api.github.com/users/naba89/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/naba89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naba89/subscriptions",
"type": "User",
"url": "https://api.github.com/users/naba89"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-07T02:46:50Z" | "2024-01-11T16:10:31Z" | "2024-01-11T16:10:31Z" | NONE | null | ### Describe the bug
Scenario:
- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't have two samples.
What works:
- Using DataLoader with `num_workers=0`
What does not work:
- Using DataLoader with `num_workers=1`, errors in the last batch.
Basically, `drop_last_batch=True` is ignored when using multiple dataloading workers.
Please take a look at the minimal repro script below.
### Steps to reproduce the bug
```python
from datasets import Dataset, interleave_datasets
from torch.utils.data import DataLoader
def merge_samples(batch):
assert len(batch['a']) == 2, "Batch size must be 2"
batch['c'] = [batch['a'][0]]
batch['d'] = [batch['a'][1]]
return batch
def gen1():
for ii in range(1, 8385):
yield {"a": ii}
def gen2():
for ii in range(1, 5302):
yield {"a": ii}
if __name__ == '__main__':
dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=1024)
dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=1024)
interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted")
mapped = interleaved.map(merge_samples, batched=True, batch_size=2, remove_columns=interleaved.column_names,
drop_last_batch=True)
# Works
loader = DataLoader(mapped, batch_size=32, num_workers=0)
i = 0
for b in loader:
print(i, b['c'].shape, b['d'].shape)
i += 1
print("DataLoader with num_workers=0 works")
# Doesn't work
loader = DataLoader(mapped, batch_size=32, num_workers=1)
i = 0
for b in loader:
print(i, b['c'].shape, b['d'].shape)
i += 1
```
### Expected behavior
`drop_last_batch=True` should have same behaviour for `num_workers=0` and `num_workers>=1`
### Environment info
- `datasets` version: 2.16.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
I have also tested on Linux and got the same behavior. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6565/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6565/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6564/comments | https://api.github.com/repos/huggingface/datasets/issues/6564/events | https://github.com/huggingface/datasets/issues/6564 | 2,068,893,194 | I_kwDODunzps57UM4K | 6,564 | `Dataset.filter` missing `with_rank` parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-06T23:48:13Z" | "2024-01-29T16:36:55Z" | "2024-01-29T16:36:54Z" | NONE | null | ### Describe the bug
The linked issue should be reopened: https://github.com/huggingface/datasets/issues/6435
When I try to pass `with_rank` to `Dataset.filter()`, I get this:
`Dataset.filter() got an unexpected keyword argument 'with_rank'`
### Steps to reproduce the bug
Run notebook:
https://colab.research.google.com/drive/1WUNKph8BdP0on5ve3gQnh_PE0cFLQqTn?usp=sharing
### Expected behavior
Should work?
### Environment info
NVIDIA RTX 4090 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6564/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6564/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6563/comments | https://api.github.com/repos/huggingface/datasets/issues/6563/events | https://github.com/huggingface/datasets/issues/6563 | 2,068,302,402 | I_kwDODunzps57R8pC | 6,563 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py) | {
"avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4",
"events_url": "https://api.github.com/users/wasertech/events{/privacy}",
"followers_url": "https://api.github.com/users/wasertech/followers",
"following_url": "https://api.github.com/users/wasertech/following{/other_user}",
"gists_url": "https://api.github.com/users/wasertech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wasertech",
"id": 79070834,
"login": "wasertech",
"node_id": "MDQ6VXNlcjc5MDcwODM0",
"organizations_url": "https://api.github.com/users/wasertech/orgs",
"received_events_url": "https://api.github.com/users/wasertech/received_events",
"repos_url": "https://api.github.com/users/wasertech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wasertech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasertech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wasertech"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-06T02:28:54Z" | "2024-03-14T02:59:42Z" | "2024-01-06T16:13:27Z" | NONE | null | ### Describe the bug
Yep, it's not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_size 4 --num_train_epochs 1 --learning_rate 1.41e-5 --gradient_accumulation_steps 8 --seq_length 4096 --output_dir output --log_with wandb
Traceback (most recent call last):
File "/home/trainer/sft_train.py", line 22, in <module>
from datasets import load_dataset
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 66, in <module>
from .arrow_reader import ArrowReader
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_reader.py", line 30, in <module>
from .download.download_config import DownloadConfig
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/download_manager.py", line 31, in <module>
from ..utils import tqdm as hf_tqdm
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/__init__.py", line 19, in <module>
from .info_utils import VerificationMode
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 5, in <module>
from huggingface_hub.utils import insecure_hashlib
ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/home/trainer/llm-train/lib/python3.8/site-packages/huggingface_hub/utils/__init__.py)
```
### Steps to reproduce the bug
Using `datasets==2.16.1` and `huggingface_hub==0.17.3`, load a dataset with `load_dataset`.
### Expected behavior
The dataset should be (downloaded - if needed - and) returned.
### Environment info
```text
trainer@a311ae86939e:/mnt$ pip show datasets
Name: datasets
Version: 2.16.1
Summary: HuggingFace community-driven open-source library of datasets
Home-page: https://github.com/huggingface/datasets
Author: HuggingFace Inc.
Author-email: thomas@huggingface.co
License: Apache 2.0
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: packaging, pyyaml, multiprocess, pyarrow-hotfix, pandas, pyarrow, xxhash, dill, numpy, aiohttp, tqdm, fsspec, requests, filelock, huggingface-hub
Required-by: trl, lm-eval, evaluate
trainer@a311ae86939e:/mnt$ pip show huggingface_hub
Name: huggingface-hub
Version: 0.17.3
Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub
Home-page: https://github.com/huggingface/huggingface_hub
Author: Hugging Face, Inc.
Author-email: julien@huggingface.co
License: Apache
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: requests, pyyaml, packaging, typing-extensions, tqdm, filelock, fsspec
Required-by: transformers, tokenizers, peft, evaluate, datasets, accelerate
trainer@a311ae86939e:/mnt$ huggingface-cli env
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.17.3
- Platform: Linux-6.5.13-7-MANJARO-x86_64-with-glibc2.29
- Python version: 3.8.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/trainer/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: wasertech
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.1.2
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 10.2.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.24.4
- pydantic: N/A
- aiohttp: 3.9.1
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/trainer/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/trainer/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/trainer/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6563/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6562/comments | https://api.github.com/repos/huggingface/datasets/issues/6562/events | https://github.com/huggingface/datasets/issues/6562 | 2,067,904,504 | I_kwDODunzps57Qbf4 | 6,562 | datasets.DownloadMode.FORCE_REDOWNLOAD use cache to download dataset features with load_dataset function | {
"avatar_url": "https://avatars.githubusercontent.com/u/73234162?v=4",
"events_url": "https://api.github.com/users/LsTam91/events{/privacy}",
"followers_url": "https://api.github.com/users/LsTam91/followers",
"following_url": "https://api.github.com/users/LsTam91/following{/other_user}",
"gists_url": "https://api.github.com/users/LsTam91/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LsTam91",
"id": 73234162,
"login": "LsTam91",
"node_id": "MDQ6VXNlcjczMjM0MTYy",
"organizations_url": "https://api.github.com/users/LsTam91/orgs",
"received_events_url": "https://api.github.com/users/LsTam91/received_events",
"repos_url": "https://api.github.com/users/LsTam91/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LsTam91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LsTam91/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LsTam91"
} | [] | open | false | null | [] | null | [
"",
""
] | "2024-01-05T19:10:25Z" | "2024-01-05T19:10:25Z" | null | NONE | null | ### Describe the bug
I updated my dataset by adding a new feature and pushed it to the Hub. When I try to download it on my machine, which contains the old version, using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`, I get an error (pasted below).
It seems that the `load_dataset` function still uses the old features schema instead of downloading everything fresh from the Hub.
I found a way to work around this issue by manually deleting the old dataset cache. But from my understanding of the `datasets.DownloadMode.FORCE_REDOWNLOAD` option, the dataset cache should be ignored.
### Steps to reproduce the bug
1. Download your dataset in your machine using `datasets.load_dataset`
2. Create a new feature in your dataset and push it to the hub
3. On the same machine redownload your dataset using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` (see the sketch below)
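A minimal hedged sketch of step 3 (hypothetical dataset name):

```python
from datasets import DownloadMode, load_dataset

# Force a fresh download instead of reusing the local cache.
ds = load_dataset(
    "your_username/your_dataset",
    download_mode=DownloadMode.FORCE_REDOWNLOAD,
)
```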
### Expected behavior
```
ValueError: Couldn't cast
id: string
level: string
context: list<element: string>
child 0, element: string
type: string
answer: string
question: string
supporting_facts: list<element: string>
child 0, element: string
fra_answer: string
fra_question: string
-- schema metadata --
huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 490
to
{'id': Value(dtype='string', id=None), 'level': Value(dtype='string', id=None), 'context': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'type': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'supporting_facts': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError
...
DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
datasets-2.16.1 huggingface-hub-0.20.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6562/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6562/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6561/comments | https://api.github.com/repos/huggingface/datasets/issues/6561/events | https://github.com/huggingface/datasets/issues/6561 | 2,067,404,951 | I_kwDODunzps57OhiX | 6,561 | Document YAML configuration with "data_dir" | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | open | false | null | [] | null | [
"",
""
] | "2024-01-05T14:03:33Z" | "2024-01-05T14:06:18Z" | null | CONTRIBUTOR | null | See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6561/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6560/comments | https://api.github.com/repos/huggingface/datasets/issues/6560/events | https://github.com/huggingface/datasets/issues/6560 | 2,065,637,625 | I_kwDODunzps57HyD5 | 6,560 | Support Video | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"",
""
] | "2024-01-04T13:10:58Z" | "2024-01-04T13:10:58Z" | null | NONE | null | ### Feature request
HF datasets are awesome at supporting text and images. It would be great to see such support for videos :)
### Motivation
Video generation :)
### Your contribution
Will probably be limited to raising this feature request ;) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6560/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6560/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6559/comments | https://api.github.com/repos/huggingface/datasets/issues/6559/events | https://github.com/huggingface/datasets/issues/6559 | 2,065,118,332 | I_kwDODunzps57FzR8 | 6,559 | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default'] | {
"avatar_url": "https://avatars.githubusercontent.com/u/145004780?v=4",
"events_url": "https://api.github.com/users/zhulinJulia24/events{/privacy}",
"followers_url": "https://api.github.com/users/zhulinJulia24/followers",
"following_url": "https://api.github.com/users/zhulinJulia24/following{/other_user}",
"gists_url": "https://api.github.com/users/zhulinJulia24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhulinJulia24",
"id": 145004780,
"login": "zhulinJulia24",
"node_id": "U_kgDOCKSY7A",
"organizations_url": "https://api.github.com/users/zhulinJulia24/orgs",
"received_events_url": "https://api.github.com/users/zhulinJulia24/received_events",
"repos_url": "https://api.github.com/users/zhulinJulia24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhulinJulia24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhulinJulia24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhulinJulia24"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-04T07:04:48Z" | "2024-01-05T01:26:26Z" | "2024-01-05T01:26:25Z" | NONE | null | ### Describe the bug
The Python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
The script succeeds when the datasets version is 2.14.7.
When using 2.16.1, an error occurs:
`ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']`
### Steps to reproduce the bug
1. pip install datasets==2.16.1
2. run python script:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
### Expected behavior
The dataset should be loaded successfully in the latest version.
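A hedged workaround (assuming the legacy `'allenai--c4'` config name is no longer exposed in 2.16): omit the second positional config argument and keep only `data_files`.

```python
from datasets import load_dataset

# Same file selection as above, without the legacy "allenai--c4" config name.
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset(
    'allenai/c4',
    data_files={'train': 'en/c4-train.00000-of-01024.json.gz'},
    split='train',
    cache_dir=cache_dir,
)
```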
### Environment info
datasets 2.16.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6559/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6559/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6558/comments | https://api.github.com/repos/huggingface/datasets/issues/6558/events | https://github.com/huggingface/datasets/issues/6558 | 2,064,885,984 | I_kwDODunzps57E6jg | 6,558 | OSError: image file is truncated (1 bytes not processed) #28323 | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andysingal",
"id": 20493493,
"login": "andysingal",
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"repos_url": "https://api.github.com/users/andysingal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andysingal"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-04T02:15:13Z" | "2024-02-21T00:38:12Z" | "2024-02-21T00:38:12Z" | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[24], line 28
23 return example
25 # Filter the dataset
26 # filtered_dataset = dataset.filter(contains_number)
27 # Add the 'label' field in the dataset
---> 28 labeled_dataset = dataset.filter(contains_number).map(add_label)
29 # View the structure of the updated dataset
30 print(labeled_dataset)
File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:975, in DatasetDict.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
972 if cache_file_names is None:
973 cache_file_names = {k: None for k in self}
974 return DatasetDict(
--> 975 {
976 k: dataset.filter(
977 function=function,
978 with_indices=with_indices,
979 input_columns=input_columns,
980 batched=batched,
981 batch_size=batch_size,
982 keep_in_memory=keep_in_memory,
983 load_from_cache_file=load_from_cache_file,
984 cache_file_name=cache_file_names[k],
985 writer_batch_size=writer_batch_size,
986 fn_kwargs=fn_kwargs,
987 num_proc=num_proc,
988 desc=desc,
989 )
990 for k, dataset in self.items()
991 }
992 )
File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:976, in <dictcomp>(.0)
972 if cache_file_names is None:
973 cache_file_names = {k: None for k in self}
974 return DatasetDict(
975 {
--> 976 k: dataset.filter(
977 function=function,
978 with_indices=with_indices,
979 input_columns=input_columns,
980 batched=batched,
981 batch_size=batch_size,
982 keep_in_memory=keep_in_memory,
983 load_from_cache_file=load_from_cache_file,
984 cache_file_name=cache_file_names[k],
985 writer_batch_size=writer_batch_size,
986 fn_kwargs=fn_kwargs,
987 num_proc=num_proc,
988 desc=desc,
989 )
990 for k, dataset in self.items()
991 }
992 )
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
550 self_format = {
551 "type": self._format_type,
552 "format_kwargs": self._format_kwargs,
553 "columns": self._format_columns,
554 "output_all_columns": self._output_all_columns,
555 }
556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
559 # re-apply format to the output
File /usr/local/lib/python3.10/dist-packages/datasets/fingerprint.py:481, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
477 validate_fingerprint(kwargs[fingerprint_name])
479 # Call actual function
--> 481 out = func(dataset, *args, **kwargs)
483 # Update fingerprint of in-place transforms + update in-place history of transforms
485 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3623, in Dataset.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3620 if len(self) == 0:
3621 return self
-> 3623 indices = self.map(
3624 function=partial(
3625 get_indices_from_mask_function, function, batched, with_indices, input_columns, self._indices
3626 ),
3627 with_indices=True,
3628 features=Features({"indices": Value("uint64")}),
3629 batched=True,
3630 batch_size=batch_size,
3631 remove_columns=self.column_names,
3632 keep_in_memory=keep_in_memory,
3633 load_from_cache_file=load_from_cache_file,
3634 cache_file_name=cache_file_name,
3635 writer_batch_size=writer_batch_size,
3636 fn_kwargs=fn_kwargs,
3637 num_proc=num_proc,
3638 suffix_template=suffix_template,
3639 new_fingerprint=new_fingerprint,
3640 input_columns=input_columns,
3641 desc=desc or "Filter",
3642 )
3643 new_dataset = copy.deepcopy(self)
3644 new_dataset._indices = indices.data
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
590 self: "Dataset" = kwargs.pop("self")
591 # apply actual function
--> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
594 for dataset in datasets:
595 # Remove task templates if a column mapping of the template is no longer valid
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
550 self_format = {
551 "type": self._format_type,
552 "format_kwargs": self._format_kwargs,
553 "columns": self._format_columns,
554 "output_all_columns": self._output_all_columns,
555 }
556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
559 # re-apply format to the output
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3093, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3087 if transformed_dataset is None:
3088 with hf_tqdm(
3089 unit=" examples",
3090 total=pbar_total,
3091 desc=desc or "Map",
3092 ) as pbar:
-> 3093 for rank, done, content in Dataset._map_single(**dataset_kwargs):
3094 if done:
3095 shards_done += 1
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3470, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
3466 indices = list(
3467 range(*(slice(i, i + batch_size).indices(shard.num_rows)))
3468 ) # Something simpler?
3469 try:
-> 3470 batch = apply_function_on_filtered_inputs(
3471 batch,
3472 indices,
3473 check_same_num_examples=len(shard.list_indexes()) > 0,
3474 offset=offset,
3475 )
3476 except NumExamplesMismatchError:
3477 raise DatasetTransformationNotAllowedError(
3478 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."
3479 ) from None
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3349, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset)
3347 if with_rank:
3348 additional_args += (rank,)
-> 3349 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
3350 if isinstance(processed_inputs, LazyDict):
3351 processed_inputs = {
3352 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format
3353 }
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:6212, in get_indices_from_mask_function(function, batched, with_indices, input_columns, indices_mapping, *args, **fn_kwargs)
6209 if input_columns is None:
6210 # inputs only contains a batch of examples
6211 batch: dict = inputs[0]
-> 6212 num_examples = len(batch[next(iter(batch.keys()))])
6213 for i in range(num_examples):
6214 example = {key: batch[key][i] for key in batch}
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:272, in LazyDict.__getitem__(self, key)
270 value = self.data[key]
271 if key in self.keys_to_format:
--> 272 value = self.format(key)
273 self.data[key] = value
274 self.keys_to_format.remove(key)
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:375, in LazyBatch.format(self, key)
374 def format(self, key):
--> 375 return self.formatter.format_column(self.pa_table.select([key]))
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:442, in PythonFormatter.format_column(self, pa_table)
440 def format_column(self, pa_table: pa.Table) -> list:
441 column = self.python_arrow_extractor().extract_column(pa_table)
--> 442 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
443 return column
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:218, in PythonFeaturesDecoder.decode_column(self, column, column_name)
217 def decode_column(self, column: list, column_name: str) -> list:
--> 218 return self.features.decode_column(column, column_name) if self.features else column
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in Features.decode_column(self, column, column_name)
1938 def decode_column(self, column: list, column_name: str):
1939 """Decode column with custom feature decoding.
1940
1941 Args:
(...)
1948 `list[Any]`
1949 """
1950 return (
-> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
1952 if self._column_requires_decoding[column_name]
1953 else column
1954 )
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in <listcomp>(.0)
1938 def decode_column(self, column: list, column_name: str):
1939 """Decode column with custom feature decoding.
1940
1941 Args:
(...)
1948 `list[Any]`
1949 """
1950 return (
-> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
1952 if self._column_requires_decoding[column_name]
1953 else column
1954 )
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /usr/local/lib/python3.10/dist-packages/datasets/features/image.py:185, in Image.decode_example(self, value, token_per_repo_id)
183 else:
184 image = PIL.Image.open(BytesIO(bytes_))
--> 185 image.load() # to avoid "Too many open files" errors
186 return image
File /usr/local/lib/python3.10/dist-packages/PIL/ImageFile.py:254, in ImageFile.load(self)
252 break
253 else:
--> 254 raise OSError(
255 "image file is truncated "
256 f"({len(b)} bytes not processed)"
257 )
259 b = b + s
260 n, err_code = decoder.decode(b)
OSError: image file is truncated (1 bytes not processed)
```
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("mehul7/captioned_military_aircraft")
from transformers import AutoImageProcessor
checkpoint = "microsoft/resnet-50"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
import re
from PIL import Image
import io
def contains_number(example):
try:
image = Image.open(io.BytesIO(example["image"]['bytes']))
t = image_processor(images=image, return_tensors="pt")['pixel_values']
except Exception as e:
print(f"Error processing image:{example['text']}")
return False
return bool(re.search(r'\d', example['text']))
# Define a function to add the 'label' field
def add_label(example):
lab = example['text'].split()
temp = 'NOT'
for item in lab:
if str(item[-1]).isdigit():
temp = item
break
example['label'] = temp
return example
# Filter the dataset
# filtered_dataset = dataset.filter(contains_number)
# Add the 'label' field in the dataset
labeled_dataset = dataset.filter(contains_number).map(add_label)
# View the structure of the updated dataset
print(labeled_dataset)
```
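If the goal is simply to tolerate the truncated file, a common Pillow-level workaround (hedged; not specific to `datasets`) is to allow loading truncated images before running `filter`/`map`:

```python
from PIL import ImageFile

# Let Pillow load truncated image files instead of raising
# "OSError: image file is truncated".
ImageFile.LOAD_TRUNCATED_IMAGES = True
```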
### Expected behavior
The labels should be created, the same as in: https://www.kaggle.com/code/jiabaowangts/dataset-air/notebook
### Environment info
Kaggle notebook P100 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6558/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6558/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6557/comments | https://api.github.com/repos/huggingface/datasets/issues/6557/events | https://github.com/huggingface/datasets/pull/6557 | 2,064,341,965 | PR_kwDODunzps5jJ63z | 6,557 | Support standalone yaml | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-03T16:47:35Z" | "2024-01-11T17:59:51Z" | "2024-01-11T17:53:42Z" | MEMBER | null | see (internal) https://huggingface.slack.com/archives/C02V51Q3800/p1703885853581679 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6557/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6557/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6557.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6557",
"merged_at": "2024-01-11T17:53:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6557.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6557"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6556/comments | https://api.github.com/repos/huggingface/datasets/issues/6556/events | https://github.com/huggingface/datasets/pull/6556 | 2,064,018,208 | PR_kwDODunzps5jI0nN | 6,556 | Fix imagefolder with one image | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-03T13:13:02Z" | "2024-02-12T21:57:34Z" | "2024-01-09T13:06:30Z" | MEMBER | null | A dataset repository with one image and one metadata file was considered a JSON dataset instead of an ImageFolder dataset. This is because we pick the dataset type with the most compatible data file extensions present in the repository and it results in a tie in this case.
e.g. for https://huggingface.co/datasets/multimodalart/repro_1_image
I fixed this by deprioritizing metadata files in the count.
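A minimal sketch of the idea (hypothetical helper, not the actual implementation): metadata files get a lower weight, so they can no longer tip the extension count away from the imagefolder builder.

```python
from collections import Counter

METADATA_FILENAMES = {"metadata.csv", "metadata.jsonl"}  # assumed metadata file names

def count_extensions(filenames):
    # Count data file extensions, deprioritizing metadata files so that a repo
    # with one image and one metadata file still resolves to imagefolder.
    counts = Counter()
    for name in filenames:
        ext = name.rsplit(".", 1)[-1].lower()
        counts[ext] += 0.5 if name.lower() in METADATA_FILENAMES else 1
    return counts

print(count_extensions(["cat.png", "metadata.jsonl"]))  # png outweighs jsonl
```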
fix https://github.com/huggingface/datasets/issues/6545 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6556/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6556/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6556.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6556",
"merged_at": "2024-01-09T13:06:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6556.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6556"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6555 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6555/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6555/comments | https://api.github.com/repos/huggingface/datasets/issues/6555/events | https://github.com/huggingface/datasets/pull/6555 | 2,063,841,286 | PR_kwDODunzps5jIM79 | 6,555 | Do not use Parquet exports if revision is passed | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"",
""
] | "2024-01-03T11:33:10Z" | "2024-02-02T10:41:33Z" | "2024-02-02T10:35:28Z" | MEMBER | null | Fix #6554. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6555/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6555/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6555.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6555",
"merged_at": "2024-02-02T10:35:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6555.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6555"
} | true |