url: stringlengths 58-61 | repository_url: stringclasses 1 value | labels_url: stringlengths 72-75 | comments_url: stringlengths 67-70 | events_url: stringlengths 65-68 | html_url: stringlengths 46-51 | id: int64 599M-2.04B | node_id: stringlengths 18-32 | number: int64 1-6.5k | title: stringlengths 1-290 | user: dict | labels: list | state: stringclasses 2 values | locked: bool 1 class | assignee: dict | assignees: list | comments: list | created_at: timestamp[s] | updated_at: timestamp[s] | closed_at: timestamp[s] | author_association: stringclasses 3 values | active_lock_reason: null | draft: bool 2 classes | pull_request: dict | body: stringlengths 0-228k ⌀ | reactions: dict | timeline_url: stringlengths 67-70 | performed_via_github_app: null | state_reason: stringclasses 3 values | is_pull_request: bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5985
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5985/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5985/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5985/events
|
https://github.com/huggingface/datasets/issues/5985
| 1,771,588,158 |
I_kwDODunzps5pmEo-
| 5,985 |
Cannot reuse tokenizer object for dataset map
|
{
"login": "vikigenius",
"id": 12724810,
"node_id": "MDQ6VXNlcjEyNzI0ODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikigenius",
"html_url": "https://github.com/vikigenius",
"followers_url": "https://api.github.com/users/vikigenius/followers",
"following_url": "https://api.github.com/users/vikigenius/following{/other_user}",
"gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions",
"organizations_url": "https://api.github.com/users/vikigenius/orgs",
"repos_url": "https://api.github.com/users/vikigenius/repos",
"events_url": "https://api.github.com/users/vikigenius/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikigenius/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] |
closed
| false | null |
[] |
[
"This is a known issue: https://github.com/huggingface/datasets/issues/3847.\r\n\r\nFixing this requires significant work - rewriting the `tokenizers` lib to make them immutable.\r\n\r\nThe current solution is to pass `cache_file_name` to `map` to use that file for caching or calling a tokenizer before `map` (with the same set of parameters as the ones in the map transform)",
"Closing since this is a duplicate"
] | 2023-06-23T14:45:31 | 2023-07-21T14:09:14 | 2023-07-21T14:09:14 |
NONE
| null | null | null |
### Describe the bug
Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or caching issue, so filing in both.
Passing the tokenizer to the dataset `map` function causes the tokenizer to be fingerprinted unexpectedly. After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though its hash remains the same.
But `dumps` is able to detect that internal change, which causes the tokenizer object's fingerprint to change.
### Steps to reproduce the bug
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
t.save_pretrained("tok1")
th1 = hash(dumps(t))
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
t.save_pretrained("tok2")
th2 = hash(dumps(t))
assert th1 == th2 # Assertion Error
```
But if you use just the hash of the object without `dumps`, the hashes don't change:
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
th1 = hash(t) # Just hash no dumps
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
th2 = hash(t) # Just hash no dumps
assert th1 == th2 # This is OK
```
This causes situations such as the following:
1. Create a text file like this `yes "This is an example text" | head -n 10000 > lines.txt`
```python
from transformers import AutoTokenizer
import datasets


class TokenizeMapper(object):
    """Mapper for tokenizer.

    This is needed because the caching mechanism of HuggingFace does not work on
    lambdas. Each time a new lambda will be created by a new process which will
    lead to a different hash.
    This way we can have a universal mapper object in init and reuse it with the same
    hash for each process.
    """

    def __init__(self, tokenizer):
        """Initialize the tokenizer."""
        self.tokenizer = tokenizer

    def __call__(self, examples, **kwargs):
        """Run the mapper."""
        texts = examples["text"]
        tt = self.tokenizer(texts, max_length=256, padding="max_length", truncation=True)
        batch_outputs = {
            "input_ids": tt.input_ids,
            "attention_mask": tt.attention_mask,
        }
        return batch_outputs


t = AutoTokenizer.from_pretrained('bert-base-uncased')
mapper = TokenizeMapper(t)
ds = datasets.load_dataset("text", data_files="lines.txt")
mds1 = ds.map(
    mapper,
    batched=False,
    remove_columns=["text"],
).with_format("torch")
mds2 = ds.map(
    mapper,
    batched=False,
    remove_columns=["text"],
).with_format("torch")
```
The second call to `map` should reuse the cached processed dataset from `mds1`, but instead it redoes the tokenization because of the behavior of `dumps`.
### Expected behavior
We should be able to initialize a tokenizer, and reusing it should let us reuse the same `map` computation for the same dataset.
The second call to `map` should reuse the cached processed dataset from `mds1`, but instead it redoes the tokenization because of the behavior of `dumps`.
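A minimal sketch of the workaround mentioned in the comments above, i.e. pinning the cache file explicitly via `map`'s `cache_file_name` argument so that cache reuse no longer depends on the tokenizer's fingerprint (the cache path and data file below are illustrative):
```python
from transformers import AutoTokenizer
import datasets

t = AutoTokenizer.from_pretrained("bert-base-uncased")
mapper = TokenizeMapper(t)  # the mapper class from the snippet above
ds = datasets.load_dataset("text", data_files="lines.txt", split="train")

# With an explicit cache file, the second call can pick up the cache written by
# the first call even though dumps(t) changes after the tokenizer has been used.
mds1 = ds.map(mapper, remove_columns=["text"], cache_file_name="tokenized.arrow")
mds2 = ds.map(mapper, remove_columns=["text"], cache_file_name="tokenized.arrow")
```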
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5985/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5985/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5984
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5984/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5984/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5984/events
|
https://github.com/huggingface/datasets/issues/5984
| 1,771,571,458 |
I_kwDODunzps5pmAkC
| 5,984 |
AutoSharding IterableDatasets when num_workers > 1
|
{
"login": "mathephysicist",
"id": 25594384,
"node_id": "MDQ6VXNlcjI1NTk0Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/25594384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathephysicist",
"html_url": "https://github.com/mathephysicist",
"followers_url": "https://api.github.com/users/mathephysicist/followers",
"following_url": "https://api.github.com/users/mathephysicist/following{/other_user}",
"gists_url": "https://api.github.com/users/mathephysicist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathephysicist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathephysicist/subscriptions",
"organizations_url": "https://api.github.com/users/mathephysicist/orgs",
"repos_url": "https://api.github.com/users/mathephysicist/repos",
"events_url": "https://api.github.com/users/mathephysicist/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathephysicist/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC/Feather) format, which allows reading arbitrary record batches (explained [here](https://arrow.apache.org/docs/python/ipc.html)). We could then use these batches to construct shards.\r\n\r\n@lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n\r\nPS: I don't expect significant speed-up for local, uncompressed Arrow files.",
"Alternatively we could support multiprocessing map for iterable datasets and let the user do the CPU intensive task there ?\r\n\r\nThis way it would work on arrow data but also on any iterable dataset",
"> For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC/Feather) format, which allows reading arbitrary record batches (explained [here](https://arrow.apache.org/docs/python/ipc.html)). We could then use these batches to construct shards.\r\n> \r\n> @lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n> \r\n> PS: I don't expect significant speed-up for local, uncompressed Arrow files.\r\n\r\nCould you explain why you'd need to change the arrow format?\r\n\r\nWhen we use streaming datasets we simply determine the number of worker shards and then add some modulo logic at the appropriate place. Worst case scenario, you'd skip streaming entries according to the number of shards.\r\n\r\nFor PyTorch, I'd be happy to provide an implementation or a sketch thereof, if you point me toward what the testing requirements would be for such a PR.",
"> Could you explain why you'd need to change the arrow format?\r\n\r\nThis way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.",
"> > Could you explain why you'd need to change the arrow format?\r\n> \r\n> This way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.\r\n\r\nI guess I don't understand why you'd need to subset the dataset in the first place. \r\nIt seems sufficient to figure out how to offset or skip rows.\r\n\r\nFor instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\nThat's one way to do it, where of course you'd need to account for gpu sharding as well.\r\n\r\n\r\nOtherwise, how did you implement worker/node/GPU sharding for iterable/streaming data where you do not have index information or prior splits (e.g. files)?",
"> For instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\n\r\nThat works indeed ! And what we meant is that you can make it even faster to instantiate. Indeed using RecordBatchStreamReader you need to get the list of all the record batches in each worker, whereas you could just get the list of record batches per worker if you use the record batches locations in the Arrow IPC file footer. This would be especially appreciated to have a fast instantiation in case you have tens of thousands of Arrow files for example.",
"Any recent updates on this ? "
] | 2023-06-23T14:34:20 | 2023-12-08T09:04:04 | null |
NONE
| null | null | null |
### Feature request
Minimal Example
```
import torch
from datasets import IterableDataset

d = IterableDataset.from_file(<file_name>)
dl = torch.utils.data.dataloader.DataLoader(d, num_workers=3)
for sample in dl:
    print(sample)
```
Warning:
Too many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1.
Expected Behavior:
The dataset is sharded so that each CPU/worker reads its own contiguous subset (so checkpoint loading/saving still works).
### Motivation
I have a lot of unused CPUs and would like to be able to shard iterable datasets with PyTorch's DataLoader when num_workers > 1. This is for a very large single file. I am aware that we can use `split_dataset_by_node` to ensure that each node (for distributed training) gets different shards, but we should extend it so that this also works for multiple workers.
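A rough sketch of a possible interim workaround (not the requested feature): if the single large file can be opened as a map-style `Dataset` first, converting it with `to_iterable_dataset(num_shards=...)` gives the iterable dataset enough shards for each DataLoader worker to receive its own slice. The file path below is illustrative.
```
import torch
from datasets import Dataset

d = Dataset.from_file("data.arrow")  # illustrative path to the single Arrow file
# Expose the data as an IterableDataset with one shard per DataLoader worker
it = d.to_iterable_dataset(num_shards=3)
dl = torch.utils.data.DataLoader(it, num_workers=3)
for sample in dl:
    print(sample)
```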
### Your contribution
If someone points me to what needs to change, I can create a PR.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5984/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5983
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5983/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5983/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5983/events
|
https://github.com/huggingface/datasets/pull/5983
| 1,770,578,804 |
PR_kwDODunzps5TtDdy
| 5,983 |
replaced PathLike as a variable for save_to_disk for dataset_path wit…
|
{
"login": "benjaminbrown038",
"id": 35114142,
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminbrown038",
"html_url": "https://github.com/benjaminbrown038",
"followers_url": "https://api.github.com/users/benjaminbrown038/followers",
"following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions",
"organizations_url": "https://api.github.com/users/benjaminbrown038/orgs",
"repos_url": "https://api.github.com/users/benjaminbrown038/repos",
"events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjaminbrown038/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 2023-06-23T00:57:05 | 2023-09-11T04:17:17 | 2023-09-11T04:17:17 |
NONE
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5983",
"html_url": "https://github.com/huggingface/datasets/pull/5983",
"diff_url": "https://github.com/huggingface/datasets/pull/5983.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5983.patch",
"merged_at": null
}
|
…h str like that of load_from_disk
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5983/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5982
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5982/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5982/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5982/events
|
https://github.com/huggingface/datasets/issues/5982
| 1,770,333,296 |
I_kwDODunzps5phSRw
| 5,982 |
404 on Datasets Documentation Page
|
{
"login": "kmulka-bloomberg",
"id": 118509387,
"node_id": "U_kgDOBxBPSw",
"avatar_url": "https://avatars.githubusercontent.com/u/118509387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kmulka-bloomberg",
"html_url": "https://github.com/kmulka-bloomberg",
"followers_url": "https://api.github.com/users/kmulka-bloomberg/followers",
"following_url": "https://api.github.com/users/kmulka-bloomberg/following{/other_user}",
"gists_url": "https://api.github.com/users/kmulka-bloomberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kmulka-bloomberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kmulka-bloomberg/subscriptions",
"organizations_url": "https://api.github.com/users/kmulka-bloomberg/orgs",
"repos_url": "https://api.github.com/users/kmulka-bloomberg/repos",
"events_url": "https://api.github.com/users/kmulka-bloomberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/kmulka-bloomberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This wasn’t working for me a bit earlier, but it looks to be back up now",
"We had a minor issue updating the docs after the latest release. It should work now :)."
] | 2023-06-22T20:14:57 | 2023-06-26T15:45:03 | 2023-06-26T15:45:03 |
NONE
| null | null | null |
### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
### Environment info
huggingface.co
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5982/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5982/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5981
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5981/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5981/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5981/events
|
https://github.com/huggingface/datasets/issues/5981
| 1,770,310,087 |
I_kwDODunzps5phMnH
| 5,981 |
Only two cores are getting used in sagemaker with pytorch 3.10 kernel
|
{
"login": "mmr-crexi",
"id": 107141022,
"node_id": "U_kgDOBmLXng",
"avatar_url": "https://avatars.githubusercontent.com/u/107141022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmr-crexi",
"html_url": "https://github.com/mmr-crexi",
"followers_url": "https://api.github.com/users/mmr-crexi/followers",
"following_url": "https://api.github.com/users/mmr-crexi/following{/other_user}",
"gists_url": "https://api.github.com/users/mmr-crexi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmr-crexi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmr-crexi/subscriptions",
"organizations_url": "https://api.github.com/users/mmr-crexi/orgs",
"repos_url": "https://api.github.com/users/mmr-crexi/repos",
"events_url": "https://api.github.com/users/mmr-crexi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmr-crexi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I think it's more likely that this issue is related to PyTorch than Datasets, as PyTorch (on import) registers functions to execute when forking a process. Maybe this is the culprit: https://github.com/pytorch/pytorch/issues/99625",
"From reading that ticket, it may be down in mkl? Is it worth hotfixing in the meantime, with the express intention of turning it off? I know that's a horribly crufty solution, but it's also deeply frustrating to be limited to 2 cores for operations as simple as filtration.",
"This is too specific and unrelated to `datasets`, so this shouldn't be fixed here.",
"@mariosasko @mmr-crexi I had the exact same problem on my kubernetes cluster. the datasets subprocess only user 1 and 17 core"
] | 2023-06-22T19:57:31 | 2023-10-30T06:17:40 | 2023-07-24T11:54:52 |
NONE
| null | null | null |
### Describe the bug
When using the newer pytorch 3.10 kernel, only 2 cores are used by the Hugging Face `filter` and `map` functions. The pytorch 3.9 kernel would use as many cores as specified via `num_proc`.
We have solved this in our own code by placing the following snippet in the code that is called inside subprocesses:
```os.sched_setaffinity(0, {i for i in range(1000)})```
The problem, as near as we can tell, is that once upon a time, CPU affinity was set using a bitmask ("0xfffff" and the like), and affinity recently changed to a list of processors rather than a mask. As such, only processors 1 and 17 are shown to be working in htop.
(screenshot omitted: htop showing only processors 1 and 17 active)
When running functions via `map`, the above resetting of affinity works to spread the work across the cores. When using `filter`, however, only two cores are active.
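A short sketch of where the affinity reset from the snippet above goes when used with `map`; the data file and the mapped function are illustrative:
```python
import os
from datasets import load_dataset

def add_length(example):
    # Workaround from this issue: re-allow a wide range of CPUs for this worker
    # process so the num_proc subprocesses are not left pinned to two cores.
    os.sched_setaffinity(0, {i for i in range(1000)})
    return {"n_chars": len(example["text"])}

ds = load_dataset("text", data_files="lines.txt", split="train")  # illustrative file
mapped = ds.map(add_length, num_proc=16)
```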
### Steps to reproduce the bug
Repro steps:
1. Create an aws sagemaker instance
2. use the pytorch 3_10 kernel
3. Load a dataset
4. run a filter operation
5. watch as only 2 cores are used when num_proc > 2
6. run a map operation
7. watch as only 2 cores are used when num_proc > 2
8. run a map operation with processor affinity reset inside the function called via map
9. Watch as all cores run
### Expected behavior
All specified cores are used via the num_proc argument.
### Environment info
AWS SageMaker with the following init script run in the terminal after instance creation:
```
conda init bash
bash
conda activate pytorch_p310
pip install Wand PyPDF pytesseract datasets seqeval pdfplumber transformers pymupdf sentencepiece timm donut-python accelerate optimum xgboost
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
sudo yum -y install htop
sudo yum -y update
sudo yum -y install wget libstdc++ autoconf automake libtool autoconf-archive pkg-config gcc gcc-c++ make libjpeg-devel libpng-devel libtiff-devel zlib-devel
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5981/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5980
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5980/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5980/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5980/events
|
https://github.com/huggingface/datasets/issues/5980
| 1,770,255,973 |
I_kwDODunzps5pg_Zl
| 5,980 |
Viewing dataset card returns “502 Bad Gateway”
|
{
"login": "tbenthompson",
"id": 4241811,
"node_id": "MDQ6VXNlcjQyNDE4MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tbenthompson",
"html_url": "https://github.com/tbenthompson",
"followers_url": "https://api.github.com/users/tbenthompson/followers",
"following_url": "https://api.github.com/users/tbenthompson/following{/other_user}",
"gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions",
"organizations_url": "https://api.github.com/users/tbenthompson/orgs",
"repos_url": "https://api.github.com/users/tbenthompson/repos",
"events_url": "https://api.github.com/users/tbenthompson/events{/privacy}",
"received_events_url": "https://api.github.com/users/tbenthompson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Can you try again? Maybe there was a minor outage.",
"Yes, it seems to be working now. In case it's helpful, the outage lasted several days. It was failing as late as yesterday morning. ",
"we fixed something on the server side, glad it's fixed now"
] | 2023-06-22T19:14:48 | 2023-06-27T08:38:19 | 2023-06-26T14:42:45 |
NONE
| null | null | null |
The url is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams
I am able to successfully view the “Files and versions” tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main)
Any help would be appreciated! Thanks! I hope this is the right place to report an issue like this.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5980/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5979
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5979/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5979/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5979/events
|
https://github.com/huggingface/datasets/pull/5979
| 1,770,198,250 |
PR_kwDODunzps5TrxS_
| 5,979 |
set dev version
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5979). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008087 / 0.011353 (-0.003266) | 0.004691 / 0.011008 (-0.006317) | 0.121545 / 0.038508 (0.083037) | 0.057436 / 0.023109 (0.034326) | 0.368864 / 0.275898 (0.092966) | 0.457199 / 0.323480 (0.133719) | 0.006745 / 0.007986 (-0.001241) | 0.003689 / 0.004328 (-0.000640) | 0.090480 / 0.004250 (0.086229) | 0.071368 / 0.037052 (0.034316) | 0.372788 / 0.258489 (0.114299) | 0.429894 / 0.293841 (0.136053) | 0.037544 / 0.128546 (-0.091002) | 0.010142 / 0.075646 (-0.065505) | 0.420467 / 0.419271 (0.001196) | 0.064359 / 0.043533 (0.020826) | 0.370345 / 0.255139 (0.115206) | 0.405220 / 0.283200 (0.122020) | 0.028410 / 0.141683 (-0.113273) | 1.824845 / 1.452155 (0.372690) | 1.888109 / 1.492716 (0.395392) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234585 / 0.018006 (0.216578) | 0.499965 / 0.000490 (0.499476) | 0.000461 / 0.000200 (0.000261) | 0.000064 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032294 / 0.037411 (-0.005117) | 0.131769 / 0.014526 (0.117243) | 0.146472 / 0.176557 (-0.030085) | 0.210035 / 0.737135 (-0.527100) | 0.145600 / 0.296338 (-0.150739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507455 / 0.215209 (0.292246) | 5.080090 / 2.077655 (3.002435) | 2.506104 / 1.504120 (1.001984) | 2.297655 / 1.541195 (0.756460) | 2.324920 / 1.468490 
(0.856430) | 0.645003 / 4.584777 (-3.939774) | 4.677856 / 3.745712 (0.932144) | 2.254179 / 5.269862 (-3.015683) | 1.280663 / 4.565676 (-3.285013) | 0.078809 / 0.424275 (-0.345466) | 0.014059 / 0.007607 (0.006452) | 0.628053 / 0.226044 (0.402009) | 6.327289 / 2.268929 (4.058360) | 2.957918 / 55.444624 (-52.486706) | 2.571568 / 6.876477 (-4.304909) | 2.708766 / 2.142072 (0.566694) | 0.772868 / 4.805227 (-4.032360) | 0.164835 / 6.500664 (-6.335829) | 0.075334 / 0.075469 (-0.000135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.471930 / 1.841788 (-0.369858) | 17.917340 / 8.074308 (9.843032) | 15.719327 / 10.191392 (5.527935) | 0.191999 / 0.680424 (-0.488424) | 0.022464 / 0.534201 (-0.511737) | 0.511038 / 0.579283 (-0.068245) | 0.512050 / 0.434364 (0.077686) | 0.608711 / 0.540337 (0.068373) | 0.749660 / 1.386936 (-0.637276) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008028 / 0.011353 (-0.003325) | 0.004908 / 0.011008 (-0.006100) | 0.092294 / 0.038508 (0.053786) | 0.053051 / 0.023109 (0.029942) | 0.453862 / 0.275898 (0.177964) | 0.512548 / 0.323480 (0.189068) | 0.004817 / 0.007986 (-0.003168) | 0.005330 / 0.004328 (0.001002) | 0.095600 / 0.004250 (0.091350) | 0.068763 / 0.037052 (0.031710) | 0.453654 / 0.258489 (0.195165) | 0.504995 / 0.293841 (0.211154) | 0.038123 / 0.128546 (-0.090423) | 0.010650 / 0.075646 (-0.064996) | 0.102854 / 0.419271 (-0.316417) | 0.062973 / 0.043533 (0.019440) | 0.430420 / 0.255139 (0.175281) | 0.465448 / 0.283200 (0.182248) | 0.029736 / 0.141683 (-0.111947) | 1.844225 / 1.452155 (0.392070) | 1.934685 / 1.492716 (0.441968) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227797 / 0.018006 (0.209791) | 0.467868 / 0.000490 (0.467378) | 0.004531 / 0.000200 (0.004331) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035632 / 0.037411 (-0.001780) | 0.145943 / 0.014526 (0.131417) | 0.151944 / 0.176557 (-0.024613) | 0.220519 / 0.737135 (-0.516616) | 0.159732 / 0.296338 (-0.136606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520641 / 0.215209 (0.305432) | 5.184740 / 2.077655 (3.107086) | 2.538751 / 1.504120 (1.034631) | 2.316571 / 1.541195 (0.775377) | 2.387898 / 1.468490 (0.919408) | 0.614515 / 4.584777 (-3.970262) | 4.573142 / 3.745712 (0.827430) | 4.657052 / 5.269862 (-0.612809) | 2.159664 / 4.565676 (-2.406013) | 0.079713 / 0.424275 (-0.344562) | 0.014462 / 0.007607 (0.006855) | 0.656611 / 0.226044 (0.430566) | 6.481630 / 2.268929 (4.212702) | 3.135047 / 55.444624 (-52.309577) | 2.757502 / 6.876477 (-4.118975) | 2.851488 / 2.142072 (0.709415) | 0.790795 / 4.805227 (-4.014432) | 0.172358 / 6.500664 (-6.328306) | 0.080255 / 0.075469 (0.004786) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.571391 / 1.841788 (-0.270396) | 19.025224 / 8.074308 (10.950916) | 17.079230 / 10.191392 (6.887838) | 0.172823 / 0.680424 (-0.507601) | 0.021845 / 0.534201 (-0.512356) | 0.522286 / 0.579283 (-0.056998) | 0.510406 / 0.434364 (0.076042) | 0.604830 / 0.540337 (0.064493) | 0.735466 / 1.386936 (-0.651471) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010025 / 0.011353 (-0.001328) | 0.005699 / 0.011008 (-0.005310) | 0.134194 / 0.038508 (0.095686) | 0.056154 / 0.023109 (0.033045) | 0.470091 / 0.275898 (0.194193) | 0.539225 / 0.323480 (0.215745) | 0.006659 / 0.007986 (-0.001326) | 0.004468 / 0.004328 (0.000140) | 0.110040 / 0.004250 (0.105790) | 0.074172 / 0.037052 (0.037119) | 0.497450 / 0.258489 (0.238961) | 0.535048 / 0.293841 (0.241207) | 0.051195 / 0.128546 (-0.077352) | 0.014926 / 0.075646 (-0.060721) | 0.461334 / 0.419271 (0.042062) | 0.073773 / 0.043533 (0.030240) | 0.450741 / 0.255139 (0.195602) | 0.474853 / 0.283200 (0.191653) | 0.036372 / 0.141683 (-0.105311) | 1.982873 / 1.452155 (0.530719) | 1.989912 / 1.492716 (0.497196) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287817 / 0.018006 (0.269811) | 0.613415 / 0.000490 (0.612926) | 0.007082 / 0.000200 (0.006882) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031119 / 0.037411 (-0.006292) | 0.129886 / 0.014526 (0.115361) | 0.143492 / 0.176557 (-0.033065) | 0.208536 / 0.737135 (-0.528600) | 0.147081 / 0.296338 (-0.149257) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668312 / 0.215209 (0.453103) | 6.568609 / 2.077655 (4.490955) | 2.708788 / 1.504120 (1.204668) | 2.366737 / 1.541195 (0.825542) | 2.392598 / 1.468490 
(0.924108) | 0.967582 / 4.584777 (-3.617195) | 5.582743 / 3.745712 (1.837031) | 3.021607 / 5.269862 (-2.248255) | 1.866402 / 4.565676 (-2.699275) | 0.115998 / 0.424275 (-0.308277) | 0.015571 / 0.007607 (0.007964) | 0.820069 / 0.226044 (0.594025) | 8.229725 / 2.268929 (5.960797) | 3.437068 / 55.444624 (-52.007557) | 2.902312 / 6.876477 (-3.974164) | 3.025874 / 2.142072 (0.883802) | 1.230359 / 4.805227 (-3.574868) | 0.237341 / 6.500664 (-6.263323) | 0.089923 / 0.075469 (0.014453) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.670970 / 1.841788 (-0.170818) | 19.667167 / 8.074308 (11.592859) | 21.624423 / 10.191392 (11.433031) | 0.231683 / 0.680424 (-0.448741) | 0.029145 / 0.534201 (-0.505056) | 0.543441 / 0.579283 (-0.035842) | 0.617510 / 0.434364 (0.183146) | 0.612662 / 0.540337 (0.072324) | 0.790589 / 1.386936 (-0.596347) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010324 / 0.011353 (-0.001029) | 0.005339 / 0.011008 (-0.005669) | 0.104762 / 0.038508 (0.066254) | 0.052631 / 0.023109 (0.029522) | 0.485864 / 0.275898 (0.209966) | 0.595768 / 0.323480 (0.272288) | 0.007417 / 0.007986 (-0.000569) | 0.005229 / 0.004328 (0.000900) | 0.100775 / 0.004250 (0.096524) | 0.067144 / 0.037052 (0.030092) | 0.522269 / 0.258489 (0.263780) | 0.592597 / 0.293841 (0.298756) | 0.051101 / 0.128546 (-0.077446) | 0.015277 / 0.075646 (-0.060369) | 0.115530 / 0.419271 (-0.303741) | 0.071922 / 0.043533 (0.028390) | 0.490208 / 0.255139 (0.235069) | 0.578936 / 0.283200 (0.295736) | 0.040382 / 0.141683 (-0.101301) | 1.986059 / 1.452155 (0.533904) | 2.040600 / 1.492716 (0.547883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300399 / 0.018006 (0.282393) | 0.624702 / 0.000490 (0.624212) | 0.004908 / 0.000200 (0.004708) | 0.000155 / 0.000054 (0.000100) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038031 / 0.037411 (0.000619) | 0.140353 / 0.014526 (0.125828) | 0.152600 / 0.176557 (-0.023956) | 0.219165 / 0.737135 (-0.517970) | 0.154232 / 0.296338 (-0.142106) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.698855 / 0.215209 (0.483646) | 7.125543 / 2.077655 (5.047889) | 3.251222 / 1.504120 (1.747102) | 2.953404 / 1.541195 (1.412209) | 3.051108 / 1.468490 (1.582618) | 0.962068 / 4.584777 (-3.622709) | 5.789579 / 3.745712 (2.043867) | 5.193271 / 5.269862 (-0.076591) | 2.757886 / 4.565676 (-1.807790) | 0.111865 / 0.424275 (-0.312410) | 0.014684 / 0.007607 (0.007077) | 0.875967 / 0.226044 (0.649923) | 8.818359 / 2.268929 (6.549430) | 4.165216 / 55.444624 (-51.279408) | 3.372059 / 6.876477 (-3.504418) | 3.486886 / 2.142072 (1.344813) | 1.232276 / 4.805227 (-3.572951) | 0.238967 / 6.500664 (-6.261697) | 0.091584 / 0.075469 (0.016115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.850755 / 1.841788 (0.008968) | 20.058756 / 8.074308 (11.984448) | 23.761271 / 10.191392 (13.569879) | 0.231826 / 0.680424 (-0.448598) | 0.030119 / 0.534201 (-0.504082) | 0.532614 / 0.579283 (-0.046669) | 0.628968 / 0.434364 (0.194604) | 0.628403 / 0.540337 (0.088066) | 0.745648 / 1.386936 (-0.641288) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-22T18:32:14 | 2023-06-22T18:42:22 | 2023-06-22T18:32:22 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5979",
"html_url": "https://github.com/huggingface/datasets/pull/5979",
"diff_url": "https://github.com/huggingface/datasets/pull/5979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5979.patch",
"merged_at": "2023-06-22T18:32:22"
}
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5979/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5978
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5978/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5978/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5978/events
|
https://github.com/huggingface/datasets/pull/5978
| 1,770,187,053 |
PR_kwDODunzps5Tru2_
| 5,978 |
Release: 2.13.1
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006173 / 0.011353 (-0.005180) | 0.003773 / 0.011008 (-0.007235) | 0.099499 / 0.038508 (0.060991) | 0.037918 / 0.023109 (0.014809) | 0.321329 / 0.275898 (0.045431) | 0.379739 / 0.323480 (0.056259) | 0.004664 / 0.007986 (-0.003322) | 0.002943 / 0.004328 (-0.001385) | 0.077759 / 0.004250 (0.073509) | 0.055271 / 0.037052 (0.018219) | 0.329428 / 0.258489 (0.070939) | 0.378731 / 0.293841 (0.084890) | 0.027737 / 0.128546 (-0.100810) | 0.008566 / 0.075646 (-0.067081) | 0.313220 / 0.419271 (-0.106052) | 0.047101 / 0.043533 (0.003568) | 0.316211 / 0.255139 (0.061072) | 0.341826 / 0.283200 (0.058626) | 0.020838 / 0.141683 (-0.120845) | 1.550064 / 1.452155 (0.097909) | 1.706518 / 1.492716 (0.213801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203093 / 0.018006 (0.185087) | 0.425345 / 0.000490 (0.424856) | 0.004800 / 0.000200 (0.004600) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024590 / 0.037411 (-0.012821) | 0.098115 / 0.014526 (0.083589) | 0.108274 / 0.176557 (-0.068282) | 0.170804 / 0.737135 (-0.566332) | 0.110560 / 0.296338 (-0.185778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425251 / 0.215209 (0.210042) | 4.239075 / 2.077655 (2.161421) | 1.955601 / 1.504120 (0.451481) | 1.774796 / 1.541195 (0.233602) | 1.826641 / 1.468490 
(0.358150) | 0.558777 / 4.584777 (-4.026000) | 3.361697 / 3.745712 (-0.384015) | 1.764468 / 5.269862 (-3.505394) | 1.032280 / 4.565676 (-3.533396) | 0.067872 / 0.424275 (-0.356403) | 0.010998 / 0.007607 (0.003391) | 0.525682 / 0.226044 (0.299637) | 5.254356 / 2.268929 (2.985427) | 2.384332 / 55.444624 (-53.060292) | 2.045578 / 6.876477 (-4.830898) | 2.170914 / 2.142072 (0.028841) | 0.674782 / 4.805227 (-4.130445) | 0.135351 / 6.500664 (-6.365314) | 0.066591 / 0.075469 (-0.008878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209181 / 1.841788 (-0.632606) | 14.044518 / 8.074308 (5.970210) | 13.184705 / 10.191392 (2.993313) | 0.130836 / 0.680424 (-0.549588) | 0.016582 / 0.534201 (-0.517619) | 0.360005 / 0.579283 (-0.219279) | 0.379519 / 0.434364 (-0.054845) | 0.422174 / 0.540337 (-0.118164) | 0.515546 / 1.386936 (-0.871390) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006293 / 0.011353 (-0.005060) | 0.003784 / 0.011008 (-0.007224) | 0.079248 / 0.038508 (0.040739) | 0.038452 / 0.023109 (0.015343) | 0.444727 / 0.275898 (0.168829) | 0.500535 / 0.323480 (0.177055) | 0.003455 / 0.007986 (-0.004531) | 0.002873 / 0.004328 (-0.001455) | 0.077439 / 0.004250 (0.073189) | 0.047855 / 0.037052 (0.010803) | 0.448049 / 0.258489 (0.189560) | 0.509517 / 0.293841 (0.215676) | 0.028359 / 0.128546 (-0.100188) | 0.008503 / 0.075646 (-0.067143) | 0.084961 / 0.419271 (-0.334310) | 0.042880 / 0.043533 (-0.000653) | 0.436628 / 0.255139 (0.181489) | 0.456574 / 0.283200 (0.173375) | 0.019539 / 0.141683 (-0.122144) | 1.561273 / 1.452155 (0.109118) | 1.572018 / 1.492716 (0.079301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230250 / 0.018006 (0.212244) | 0.415189 / 0.000490 (0.414700) | 0.003213 / 0.000200 (0.003013) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025541 / 0.037411 (-0.011871) | 0.102326 / 0.014526 (0.087800) | 0.110258 / 0.176557 (-0.066298) | 0.162488 / 0.737135 (-0.574647) | 0.112782 / 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457936 / 0.215209 (0.242727) | 4.581503 / 2.077655 (2.503848) | 2.237659 / 1.504120 (0.733540) | 2.029960 / 1.541195 (0.488765) | 2.082911 / 1.468490 (0.614421) | 0.556485 / 4.584777 (-4.028292) | 3.384418 / 3.745712 (-0.361295) | 1.748809 / 5.269862 (-3.521053) | 1.034759 / 4.565676 (-3.530917) | 0.067500 / 0.424275 (-0.356776) | 0.011425 / 0.007607 (0.003818) | 0.561340 / 0.226044 (0.335295) | 5.623629 / 2.268929 (3.354701) | 2.733587 / 55.444624 (-52.711038) | 2.401578 / 6.876477 (-4.474899) | 2.524569 / 2.142072 (0.382496) | 0.673170 / 4.805227 (-4.132057) | 0.136681 / 6.500664 (-6.363983) | 0.068060 / 0.075469 (-0.007409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318651 / 1.841788 (-0.523137) | 14.362123 / 8.074308 (6.287815) | 14.385964 / 10.191392 (4.194572) | 0.149914 / 0.680424 (-0.530510) | 0.016877 / 0.534201 (-0.517324) | 0.358406 / 0.579283 (-0.220877) | 0.394349 / 0.434364 (-0.040015) | 0.422471 / 0.540337 (-0.117866) | 0.513807 / 1.386936 (-0.873129) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006272 / 0.011353 (-0.005080) | 0.003903 / 0.011008 (-0.007105) | 0.100180 / 0.038508 (0.061672) | 0.037799 / 0.023109 (0.014690) | 0.385627 / 0.275898 (0.109729) | 0.446518 / 0.323480 (0.123038) | 0.004811 / 0.007986 (-0.003175) | 0.003032 / 0.004328 (-0.001296) | 0.077063 / 0.004250 (0.072812) | 0.055564 / 0.037052 (0.018512) | 0.397346 / 0.258489 (0.138857) | 0.443242 / 0.293841 (0.149401) | 0.027904 / 0.128546 (-0.100642) | 0.008386 / 0.075646 (-0.067260) | 0.315013 / 0.419271 (-0.104259) | 0.047943 / 0.043533 (0.004410) | 0.378443 / 0.255139 (0.123304) | 0.411472 / 0.283200 (0.128272) | 0.020465 / 0.141683 (-0.121218) | 1.526594 / 1.452155 (0.074439) | 1.547018 / 1.492716 (0.054301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219377 / 0.018006 (0.201370) | 0.430254 / 0.000490 (0.429764) | 0.003218 / 0.000200 (0.003018) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023667 / 0.037411 (-0.013744) | 0.099143 / 0.014526 (0.084617) | 0.106044 / 0.176557 (-0.070513) | 0.166186 / 0.737135 (-0.570949) | 0.108736 / 0.296338 (-0.187603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437971 / 0.215209 (0.222762) | 4.363675 / 2.077655 (2.286021) | 2.011993 / 1.504120 (0.507873) | 1.845189 / 1.541195 (0.303994) | 1.831848 / 1.468490 
(0.363358) | 0.562402 / 4.584777 (-4.022375) | 3.365259 / 3.745712 (-0.380453) | 1.781491 / 5.269862 (-3.488371) | 1.023454 / 4.565676 (-3.542223) | 0.067857 / 0.424275 (-0.356418) | 0.011076 / 0.007607 (0.003469) | 0.532267 / 0.226044 (0.306223) | 5.340344 / 2.268929 (3.071415) | 2.388649 / 55.444624 (-53.055976) | 2.055373 / 6.876477 (-4.821104) | 2.205047 / 2.142072 (0.062975) | 0.672909 / 4.805227 (-4.132318) | 0.135244 / 6.500664 (-6.365420) | 0.066184 / 0.075469 (-0.009285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206838 / 1.841788 (-0.634950) | 13.967075 / 8.074308 (5.892767) | 13.143971 / 10.191392 (2.952579) | 0.143991 / 0.680424 (-0.536433) | 0.016673 / 0.534201 (-0.517527) | 0.376180 / 0.579283 (-0.203103) | 0.386550 / 0.434364 (-0.047814) | 0.440590 / 0.540337 (-0.099747) | 0.529974 / 1.386936 (-0.856962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003784 / 0.011008 (-0.007224) | 0.077875 / 0.038508 (0.039367) | 0.038689 / 0.023109 (0.015580) | 0.421684 / 0.275898 (0.145786) | 0.472649 / 0.323480 (0.149169) | 0.003570 / 0.007986 (-0.004415) | 0.004448 / 0.004328 (0.000120) | 0.077867 / 0.004250 (0.073616) | 0.049514 / 0.037052 (0.012462) | 0.375983 / 0.258489 (0.117494) | 0.470632 / 0.293841 (0.176791) | 0.028238 / 0.128546 (-0.100308) | 0.008462 / 0.075646 (-0.067185) | 0.082452 / 0.419271 (-0.336819) | 0.043617 / 0.043533 (0.000084) | 0.400874 / 0.255139 (0.145735) | 0.426191 / 0.283200 (0.142992) | 0.020602 / 0.141683 (-0.121081) | 1.567658 / 1.452155 (0.115504) | 1.572610 / 1.492716 (0.079893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246144 / 0.018006 (0.228138) | 0.419402 / 0.000490 (0.418913) | 0.001691 / 0.000200 (0.001491) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026105 / 0.037411 (-0.011306) | 0.104734 / 0.014526 (0.090208) | 0.110257 / 0.176557 (-0.066300) | 0.161429 / 0.737135 (-0.575706) | 0.114367 / 0.296338 (-0.181972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453352 / 0.215209 (0.238143) | 4.537924 / 2.077655 (2.460269) | 2.196193 / 1.504120 (0.692073) | 2.002087 / 1.541195 (0.460892) | 2.041722 / 1.468490 (0.573231) | 0.561643 / 4.584777 (-4.023134) | 3.449108 / 3.745712 (-0.296605) | 2.862800 / 5.269862 (-2.407062) | 1.387895 / 4.565676 (-3.177782) | 0.068076 / 0.424275 (-0.356199) | 0.011568 / 0.007607 (0.003961) | 0.559279 / 0.226044 (0.333235) | 5.598738 / 2.268929 (3.329809) | 2.676649 / 55.444624 (-52.767975) | 2.334588 / 6.876477 (-4.541889) | 2.376215 / 2.142072 (0.234142) | 0.673109 / 4.805227 (-4.132118) | 0.137587 / 6.500664 (-6.363077) | 0.069131 / 0.075469 (-0.006338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307332 / 1.841788 (-0.534456) | 14.536036 / 8.074308 (6.461728) | 14.173734 / 10.191392 (3.982342) | 0.145143 / 0.680424 (-0.535281) | 0.016662 / 0.534201 (-0.517539) | 0.366901 / 0.579283 (-0.212383) | 0.394498 / 0.434364 (-0.039866) | 0.430546 / 0.540337 (-0.109792) | 0.518950 / 1.386936 (-0.867986) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008122 / 0.011353 (-0.003231) | 0.005585 / 0.011008 (-0.005424) | 0.121219 / 0.038508 (0.082711) | 0.047616 / 0.023109 (0.024507) | 0.440576 / 0.275898 (0.164678) | 0.491053 / 0.323480 (0.167573) | 0.004774 / 0.007986 (-0.003211) | 0.006758 / 0.004328 (0.002430) | 0.103852 / 0.004250 (0.099602) | 0.071560 / 0.037052 (0.034508) | 0.463107 / 0.258489 (0.204618) | 0.516904 / 0.293841 (0.223063) | 0.048052 / 0.128546 (-0.080494) | 0.013679 / 0.075646 (-0.061968) | 0.428383 / 0.419271 (0.009112) | 0.069468 / 0.043533 (0.025936) | 0.432593 / 0.255139 (0.177454) | 0.471810 / 0.283200 (0.188611) | 0.037541 / 0.141683 (-0.104142) | 1.823490 / 1.452155 (0.371335) | 1.922558 / 1.492716 (0.429842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252315 / 0.018006 (0.234309) | 0.541757 / 0.000490 (0.541267) | 0.000373 / 0.000200 (0.000173) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030361 / 0.037411 (-0.007050) | 0.125928 / 0.014526 (0.111402) | 0.145102 / 0.176557 (-0.031455) | 0.209798 / 0.737135 (-0.527337) | 0.147349 / 0.296338 (-0.148990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627554 / 0.215209 (0.412345) | 5.917422 / 2.077655 (3.839767) | 2.491083 / 1.504120 (0.986963) | 2.147078 / 1.541195 (0.605883) | 2.167511 / 1.468490 
(0.699021) | 0.903061 / 4.584777 (-3.681716) | 5.518537 / 3.745712 (1.772825) | 2.654348 / 5.269862 (-2.615514) | 1.645121 / 4.565676 (-2.920556) | 0.103782 / 0.424275 (-0.320493) | 0.013048 / 0.007607 (0.005441) | 0.756732 / 0.226044 (0.530687) | 7.622873 / 2.268929 (5.353945) | 3.122689 / 55.444624 (-52.321936) | 2.537735 / 6.876477 (-4.338742) | 2.640090 / 2.142072 (0.498018) | 1.128635 / 4.805227 (-3.676593) | 0.228089 / 6.500664 (-6.272575) | 0.086207 / 0.075469 (0.010738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561591 / 1.841788 (-0.280197) | 18.110299 / 8.074308 (10.035991) | 20.718017 / 10.191392 (10.526625) | 0.225741 / 0.680424 (-0.454682) | 0.031738 / 0.534201 (-0.502463) | 0.530789 / 0.579283 (-0.048495) | 0.607364 / 0.434364 (0.173000) | 0.581593 / 0.540337 (0.041256) | 0.726033 / 1.386936 (-0.660903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009323 / 0.011353 (-0.002030) | 0.005360 / 0.011008 (-0.005649) | 0.103608 / 0.038508 (0.065100) | 0.050158 / 0.023109 (0.027049) | 0.499906 / 0.275898 (0.224008) | 0.561005 / 0.323480 (0.237525) | 0.005093 / 0.007986 (-0.002892) | 0.008285 / 0.004328 (0.003956) | 0.103446 / 0.004250 (0.099196) | 0.061478 / 0.037052 (0.024426) | 0.494016 / 0.258489 (0.235527) | 0.537550 / 0.293841 (0.243709) | 0.048829 / 0.128546 (-0.079717) | 0.017032 / 0.075646 (-0.058614) | 0.107748 / 0.419271 (-0.311524) | 0.065607 / 0.043533 (0.022074) | 0.488709 / 0.255139 (0.233570) | 0.512023 / 0.283200 (0.228823) | 0.032067 / 0.141683 (-0.109616) | 1.907585 / 1.452155 (0.455431) | 1.960994 / 1.492716 (0.468278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278378 / 0.018006 (0.260371) | 0.551474 / 0.000490 (0.550985) | 0.006886 / 0.000200 (0.006686) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030674 / 0.037411 (-0.006737) | 0.135179 / 0.014526 (0.120654) | 0.133703 / 0.176557 (-0.042853) | 0.198923 / 0.737135 (-0.538212) | 0.155108 / 0.296338 (-0.141231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.690566 / 0.215209 (0.475357) | 6.789594 / 2.077655 (4.711940) | 2.940668 / 1.504120 (1.436549) | 2.562431 / 1.541195 (1.021236) | 2.554232 / 1.468490 (1.085742) | 0.888470 / 4.584777 (-3.696307) | 5.672318 / 3.745712 (1.926606) | 2.741626 / 5.269862 (-2.528236) | 1.818336 / 4.565676 (-2.747340) | 0.110434 / 0.424275 (-0.313841) | 0.014114 / 0.007607 (0.006507) | 0.830632 / 0.226044 (0.604588) | 8.270787 / 2.268929 (6.001859) | 3.723486 / 55.444624 (-51.721139) | 2.993671 / 6.876477 (-3.882806) | 2.918273 / 2.142072 (0.776201) | 1.105337 / 4.805227 (-3.699891) | 0.222976 / 6.500664 (-6.277688) | 0.085290 / 0.075469 (0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.816027 / 1.841788 (-0.025760) | 18.496850 / 8.074308 (10.422541) | 20.457032 / 10.191392 (10.265640) | 0.243533 / 0.680424 (-0.436891) | 0.027044 / 0.534201 (-0.507157) | 0.500752 / 0.579283 (-0.078531) | 0.620963 / 0.434364 (0.186599) | 0.607995 / 0.540337 (0.067658) | 0.722915 / 1.386936 (-0.664021) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-22T18:23:11 | 2023-06-22T18:40:24 | 2023-06-22T18:30:16 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5978",
"html_url": "https://github.com/huggingface/datasets/pull/5978",
"diff_url": "https://github.com/huggingface/datasets/pull/5978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5978.patch",
"merged_at": "2023-06-22T18:30:16"
}
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5978/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5976
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5976/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5976/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5976/events
|
https://github.com/huggingface/datasets/pull/5976
| 1,768,503,913 |
PR_kwDODunzps5TmAFp
| 5,976 |
Avoid stuck map operation when a subprocess crashes
|
{
"login": "pappacena",
"id": 1213561,
"node_id": "MDQ6VXNlcjEyMTM1NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1213561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pappacena",
"html_url": "https://github.com/pappacena",
"followers_url": "https://api.github.com/users/pappacena/followers",
"following_url": "https://api.github.com/users/pappacena/following{/other_user}",
"gists_url": "https://api.github.com/users/pappacena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pappacena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pappacena/subscriptions",
"organizations_url": "https://api.github.com/users/pappacena/orgs",
"repos_url": "https://api.github.com/users/pappacena/repos",
"events_url": "https://api.github.com/users/pappacena/events{/privacy}",
"received_events_url": "https://api.github.com/users/pappacena/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi ! Do you think this can be fixed at the Pool level ? Ideally it should be the Pool responsibility to handle this, not the `map` code. We could even subclass Pool if needed (at least the one from `multiprocess`)",
"@lhoestq it makes sense to me. Just pushed a refactoring creating a `class ProcessPool(multiprocess.pool.Pool)` to keep track of the PID changes.",
"_The documentation is not available anymore as the PR was closed or merged._",
"I managed to raise an error without subclassing Pool with two additions to `iflatmap_unordered`:\r\n\r\n1. at the beggining\r\n```python\r\noriginal_pool = list(pool._pool)\r\n```\r\n\r\n2. in the loop\r\n```python\r\nif any(async_result._pool != original_pool for async_result in async_results) and queue.empty():\r\n raise RuntimeError(\r\n \"One of the subprocesses has abruptly died during map operation.\"\r\n \"To debug the error, disable multiprocessing.\"\r\n )\r\n```\r\n\r\nIt's still a fix that only works for `iflatmap_unordered` (so not for map, imap etc) but is maybe simpler that subclassing. It also works for both multiprocessing.Pool and multiprocess.Pool",
"@lhoestq sorry for the delay. Busy weeks here. \r\n\r\nI just pushed the change you requested. It looks closer to the original proposal, actually.\r\n\r\nIt seems that `map` actually uses `iflatmap_unordered` ([here](https://github.com/huggingface/datasets/blob/819bb4346434912eb405ce3f3e9f21dc25a2fe85/src/datasets/arrow_dataset.py#L1509)). I think this solution works fine for the `map` method (which is the one being tested by the new `tests/test_arrow_dataset.py::BaseDatasetTest::test_map_crash_subprocess`, right?).",
"Yes fixing iflatmap_unordered does fix Dataset.map, but it won't fix any Pool.map that we may use elsewhere so we'll have to keep this in mind.",
"It looks all good to me, feel free to fix code formatting by running `make style` and we can merge :)",
"> Yes fixing iflatmap_unordered does fix Dataset.map, but it won't fix any Pool.map that we may use elsewhere so we'll have to keep this in mind.\r\n\r\nRight, I agree. The best way moving forward is probably not using the buggy `multiprocess.Pool` anymore, and replace it with `concurrent.futures.ProcessPoolExecutor` as much as possible.\r\n\r\nAnyway, I've run `make style` now. Thanks for the support!",
"It looks like checking the async_result._pool doesn't always work - sorry about that. We might just go back to your original solution then. Would also be cool to open an issue in `multiprocess` to ask if they have a solution or if they plan to fix this.",
"@lhoestq no problem! Reverted to the previous version.\r\n\r\nTBH, given the discussions [in this python issue](https://github.com/python/cpython/issues/66587), I don't think the error in `multiprocess` will be merged upstream any time soon...",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006060 / 0.011353 (-0.005293) | 0.003695 / 0.011008 (-0.007313) | 0.080484 / 0.038508 (0.041976) | 0.061894 / 0.023109 (0.038785) | 0.312510 / 0.275898 (0.036612) | 0.352398 / 0.323480 (0.028918) | 0.004638 / 0.007986 (-0.003348) | 0.002918 / 0.004328 (-0.001410) | 0.062932 / 0.004250 (0.058681) | 0.050859 / 0.037052 (0.013807) | 0.316812 / 0.258489 (0.058323) | 0.357684 / 0.293841 (0.063843) | 0.027622 / 0.128546 (-0.100924) | 0.008012 / 0.075646 (-0.067634) | 0.260970 / 0.419271 (-0.158302) | 0.045807 / 0.043533 (0.002275) | 0.321235 / 0.255139 (0.066096) | 0.343162 / 0.283200 (0.059962) | 0.021136 / 0.141683 (-0.120547) | 1.465886 / 1.452155 (0.013731) | 1.500216 / 1.492716 (0.007500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187286 / 0.018006 (0.169279) | 0.428724 / 0.000490 (0.428235) | 0.003029 / 0.000200 (0.002829) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022703 / 0.037411 (-0.014708) | 0.072740 / 0.014526 (0.058215) | 0.083436 / 0.176557 (-0.093120) | 0.144559 / 0.737135 (-0.592577) | 0.083958 / 0.296338 (-0.212380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435729 / 0.215209 (0.220520) | 4.351146 / 2.077655 (2.273491) | 2.316627 / 1.504120 (0.812508) | 2.144587 / 1.541195 (0.603393) | 2.209182 / 1.468490 
(0.740692) | 0.501131 / 4.584777 (-4.083646) | 3.077085 / 3.745712 (-0.668627) | 4.353706 / 5.269862 (-0.916156) | 2.621523 / 4.565676 (-1.944154) | 0.058976 / 0.424275 (-0.365299) | 0.006467 / 0.007607 (-0.001141) | 0.506690 / 0.226044 (0.280646) | 5.085787 / 2.268929 (2.816858) | 2.731336 / 55.444624 (-52.713289) | 2.419451 / 6.876477 (-4.457025) | 2.583649 / 2.142072 (0.441577) | 0.589869 / 4.805227 (-4.215359) | 0.131040 / 6.500664 (-6.369624) | 0.061332 / 0.075469 (-0.014137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220542 / 1.841788 (-0.621245) | 18.169643 / 8.074308 (10.095335) | 13.251704 / 10.191392 (3.060312) | 0.142952 / 0.680424 (-0.537472) | 0.016639 / 0.534201 (-0.517562) | 0.334851 / 0.579283 (-0.244432) | 0.361865 / 0.434364 (-0.072499) | 0.380933 / 0.540337 (-0.159404) | 0.527374 / 1.386936 (-0.859562) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006319 / 0.011353 (-0.005034) | 0.003778 / 0.011008 (-0.007231) | 0.062388 / 0.038508 (0.023880) | 0.062228 / 0.023109 (0.039119) | 0.373727 / 0.275898 (0.097829) | 0.399442 / 0.323480 (0.075962) | 0.005434 / 0.007986 (-0.002551) | 0.003020 / 0.004328 (-0.001308) | 0.062774 / 0.004250 (0.058524) | 0.052784 / 0.037052 (0.015732) | 0.376428 / 0.258489 (0.117939) | 0.405039 / 0.293841 (0.111198) | 0.027884 / 0.128546 (-0.100662) | 0.008086 / 0.075646 (-0.067561) | 0.067078 / 0.419271 (-0.352194) | 0.042927 / 0.043533 (-0.000606) | 0.372142 / 0.255139 (0.117003) | 0.389604 / 0.283200 (0.106405) | 0.021582 / 0.141683 (-0.120101) | 1.473332 / 1.452155 (0.021177) | 1.536018 / 1.492716 (0.043302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184729 / 0.018006 (0.166723) | 0.421065 / 0.000490 (0.420575) | 0.002681 / 0.000200 (0.002481) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026067 / 0.037411 (-0.011344) | 0.077138 / 0.014526 (0.062612) | 0.085178 / 0.176557 (-0.091379) | 0.139681 / 0.737135 (-0.597454) | 0.087528 / 0.296338 (-0.208810) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444899 / 0.215209 (0.229690) | 4.459168 / 2.077655 (2.381513) | 2.408792 / 1.504120 (0.904672) | 2.237243 / 1.541195 (0.696048) | 2.296298 / 1.468490 (0.827808) | 0.498508 / 4.584777 (-4.086269) | 3.067064 / 3.745712 (-0.678648) | 4.470577 / 5.269862 (-0.799284) | 2.701972 / 4.565676 (-1.863705) | 0.057711 / 0.424275 (-0.366564) | 0.006443 / 0.007607 (-0.001164) | 0.524046 / 0.226044 (0.298002) | 5.229928 / 2.268929 (2.961000) | 2.862101 / 55.444624 (-52.582523) | 2.545972 / 6.876477 (-4.330504) | 2.606459 / 2.142072 (0.464387) | 0.593285 / 4.805227 (-4.211942) | 0.124913 / 6.500664 (-6.375751) | 0.061942 / 0.075469 (-0.013527) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322162 / 1.841788 (-0.519625) | 18.745796 / 8.074308 (10.671488) | 13.955443 / 10.191392 (3.764051) | 0.145610 / 0.680424 (-0.534814) | 0.016817 / 0.534201 (-0.517384) | 0.331180 / 0.579283 (-0.248103) | 0.343019 / 0.434364 (-0.091345) | 0.379459 / 0.540337 (-0.160878) | 0.526403 / 1.386936 (-0.860533) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-21T21:18:31 | 2023-07-10T09:58:39 | 2023-07-10T09:50:07 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5976",
"html_url": "https://github.com/huggingface/datasets/pull/5976",
"diff_url": "https://github.com/huggingface/datasets/pull/5976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5976.patch",
"merged_at": "2023-07-10T09:50:07"
}
|
I've been using Dataset.map() with `num_proc=os.cpu_count()` to leverage multicore processing for my datasets, but from time to time the map operation gets stuck, with processes waiting forever. Apparently, when one of the subprocesses is abruptly killed (OOM killer, segfault, SIGKILL, etc.), the main process keeps waiting for the async task sent to that child process to finish.
It seems to be easy to reproduce the issue with the following script:
```
import os
from datasets import Dataset, Features, Value
def do_stuck(item):
    # each worker kills itself with SIGKILL, simulating an abrupt crash (OOM kill, segfault, ...)
    os.kill(os.getpid(), 9)

data = {
    "col1": list(range(5)),
    "col2": list(range(5)),
}

ds = Dataset.from_dict(
    data,
    features=Features({
        "col1": Value("int64"),
        "col2": Value("int64"),
    }),
)

print(ds.map(do_stuck, num_proc=4))
```
This is an old behavior in Python, which apparently was fixed a few years ago in `concurrent.futures.ProcessPoolExecutor` ([ref](https://bugs.python.org/issue9205)), but not in `multiprocessing.pool.Pool` / `multiprocess.pool.Pool`, which is used by `Dataset.map` ([ref](https://bugs.python.org/issue22393)).
This PR is an attempt to detect when a child process gets killed and raise a `RuntimeError` to warn the `dataset.map()` caller.
EDIT: Related proposal for future improvement: https://github.com/huggingface/datasets/discussions/5977
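To make the idea concrete, here is a minimal, hedged sketch of the detection approach (this is not the code in this PR; `map_with_crash_detection` and `poll_interval` are made-up names for illustration, and the snippet relies on the private `Pool._pool` attribute): snapshot the pool's worker processes before submitting work and fail fast if any of them dies before the async result is ready.
```
import multiprocessing


def map_with_crash_detection(pool, func, iterable, poll_interval=0.5):
    # Snapshot the worker Process objects before submitting work; the pool may
    # respawn dead workers later, but the originals stay dead and detectable.
    original_workers = list(pool._pool)
    async_result = pool.map_async(func, iterable)
    while not async_result.ready():
        if any(not worker.is_alive() for worker in original_workers):
            raise RuntimeError(
                "One of the subprocesses has abruptly died during map operation. "
                "To debug the error, disable multiprocessing."
            )
        async_result.wait(timeout=poll_interval)
    return async_result.get()


if __name__ == "__main__":
    with multiprocessing.Pool(processes=4) as pool:
        print(map_with_crash_detection(pool, abs, range(-5, 5)))
```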
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5976/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5976/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5975
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5975/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5975/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5975/events
|
https://github.com/huggingface/datasets/issues/5975
| 1,768,271,343 |
I_kwDODunzps5pZa3v
| 5,975 |
Streaming Dataset behind Proxy - FileNotFoundError
|
{
"login": "Veluchs",
"id": 135350576,
"node_id": "U_kgDOCBFJMA",
"avatar_url": "https://avatars.githubusercontent.com/u/135350576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Veluchs",
"html_url": "https://github.com/Veluchs",
"followers_url": "https://api.github.com/users/Veluchs/followers",
"following_url": "https://api.github.com/users/Veluchs/following{/other_user}",
"gists_url": "https://api.github.com/users/Veluchs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Veluchs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Veluchs/subscriptions",
"organizations_url": "https://api.github.com/users/Veluchs/orgs",
"repos_url": "https://api.github.com/users/Veluchs/repos",
"events_url": "https://api.github.com/users/Veluchs/events{/privacy}",
"received_events_url": "https://api.github.com/users/Veluchs/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Duplicate of #",
"Hi ! can you try to set the upper case environment variables `HTTP_PROXY` and `HTTPS_PROXY` ?\r\n\r\nWe use `aiohttp` for streaming and it uses case sensitive environment variables",
"Hi, thanks for the quick reply.\r\n\r\nI set the uppercase env variables with\r\n\r\n`\r\nos.environ['HTTP_PROXY'] = \"http://example.com:xxxx\" \r\nos.environ['HTTPS_PROXY'] = \"http://example.com:xxxx\" \r\n`\r\n\r\nHowever, I still get the same error.\r\n\r\nOne thing that could be helpfull: When downloading a dataset without streaming i get the following message:\r\n_HF google storage unreachable. Downloading and preparing it from source_.\r\nThe download does however work as expected.\r\n",
"Are you able to use `aiohttp` to get the file at `https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json` using your proxy ?",
"It only works when passing trust_env=True when creating the ClientSession, as well as setting ssl=False.\r\n\r\nWorking Example:\r\n\r\n```\r\nimport os\r\n\r\nos.environ['HTTP_PROXY'] = \"xyz\"\r\nos.environ['HTTPS_PROXY'] = \"xyz\"\r\n\r\nimport asyncio\r\nimport aiohttp\r\n\r\nasync def download_pep(url):\r\n async with aiohttp.ClientSession(trust_env=True) as session:\r\n print(\"1\")\r\n async with session.get(url, ssl=False) as resp:\r\n print(\"2\")\r\n content = await resp.text()\r\n print(content)\r\n return content\r\n\r\nasyncio.run(download_pep(\"https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\"))\r\n```\r\n\r\n\r\n\r\nSSL Verification has been a problem with other packages as well. Usually I circumvent the problem by setting\r\n```\r\nimport ssl\r\nssl._create_default_https_context = ssl._create_unverified_context\r\n```\r\n(probably not the best idea for security), although here aiohttp does not seem to use this default context.",
"We do pass `trust_env` as well. Could you share the full stack trace you get when streaming using `datasets` ? That could help locate where we might have forgotten to pass `trust_env`",
"Is there a way to disable ssl verification when streaming a dataset. I suspect this might be the isssue with my proxy.\r\n\r\n\r\nHere you go:\r\n\r\n```\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[8], line 3\r\n 1 from datasets import load_dataset\r\n----> 3 ds = load_dataset(\"facebook/voxpopuli\", name=\"de\", streaming=True)\r\n 5 sample = next(iter(ds))\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/load.py:1790](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/load.py:1790), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/builder.py:1281](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/builder.py:1281), in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1274 dl_manager = StreamingDownloadManager(\r\n 1275 base_path=base_path or self.base_path,\r\n 1276 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1277 dataset_name=self.name,\r\n 1278 data_dir=self.config.data_dir,\r\n 1279 )\r\n 1280 self._check_manual_download(dl_manager)\r\n-> 1281 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1282 # By default, return all splits\r\n 1283 if split is None:\r\n\r\nFile [~/.cache/huggingface/modules/datasets_modules/datasets/facebook--voxpopuli/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604/voxpopuli.py:120](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.cache/huggingface/modules/datasets_modules/datasets/facebook--voxpopuli/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604/voxpopuli.py:120), in Voxpopuli._split_generators(self, dl_manager)\r\n 118 def _split_generators(self, dl_manager):\r\n 119 n_shards_path = dl_manager.download_and_extract(_N_SHARDS_FILE)\r\n--> 120 with open(n_shards_path) as f:\r\n 121 n_shards = json.load(f)\r\n 123 if self.config.name == \"en_accented\":\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/streaming.py:71](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/streaming.py:71), in extend_module_for_streaming..wrap_auth..wrapper(*args, **kwargs)\r\n 69 @wraps(function)\r\n 70 def wrapper(*args, **kwargs):\r\n---> 71 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile 
[~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:517](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:517), in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 515 except FileNotFoundError:\r\n 516 if file.startswith(config.HF_ENDPOINT):\r\n--> 517 raise FileNotFoundError(\r\n 518 file + \"\\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\"\r\n 519 ) from None\r\n 520 else:\r\n 521 raise\r\n\r\nFileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```",
"> Is there a way to disable ssl verification when streaming a dataset.\r\n\r\nI don't think so.\r\n\r\nWe use `fsspec` HTTPFileSystem implementation that is based on `aiohttp`. If you register a subclass of HTTPFileSystem that has SSL disabled by default it could work, but I wouldn't recommended it because it can raise security issues.",
"Okay thanks for your help! I guess I have to figure out how to improve the proxy environment / see if I can make it work with ssl connections."
] | 2023-06-21T19:10:02 | 2023-06-30T05:55:39 | 2023-06-30T05:55:38 |
NONE
| null | null | null |
### Describe the bug
When trying to stream a dataset, I get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I have already set the proxy environment variables. Downloading a Dataset without streaming works as expected.
Still, I suspect that this is connected to being behind a proxy.
Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec?
### Steps to reproduce the bug
This is the code i use.
```
import os
os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"
from datasets import load_dataset
ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```
### Expected behavior
I would expect the streaming functionality to use the configured proxy settings.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
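For completeness, here is a small consolidated sketch of the workaround that emerged in the comments above (assumptions: the proxy URL is a placeholder, and passing `ssl=False` to skip certificate verification is specific to this proxy setup and insecure in general): set the upper-case proxy variables that `aiohttp` honours, then check that the file is reachable through the proxy before streaming with `datasets`.
```
import asyncio
import os

import aiohttp

proxy = "http://example.com:xxxx"  # placeholder proxy URL
# aiohttp, which backs dataset streaming, only reads the upper-case variables.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ[var] = proxy

async def fetch(url):
    # trust_env=True makes aiohttp pick up the proxy variables;
    # ssl=False skips certificate verification (insecure, proxy-specific).
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.get(url, ssl=False) as resp:
            return await resp.text()

print(asyncio.run(fetch(
    "https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json"
)))
```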
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5975/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5974
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5974/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5974/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5974/events
|
https://github.com/huggingface/datasets/pull/5974
| 1,767,981,231 |
PR_kwDODunzps5TkXCb
| 5,974 |
Deprecate `errors` param in favor of `encoding_errors` in text builder
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006518 / 0.011353 (-0.004835) | 0.004121 / 0.011008 (-0.006887) | 0.103350 / 0.038508 (0.064842) | 0.045030 / 0.023109 (0.021920) | 0.351670 / 0.275898 (0.075772) | 0.408110 / 0.323480 (0.084630) | 0.003883 / 0.007986 (-0.004102) | 0.003352 / 0.004328 (-0.000977) | 0.078786 / 0.004250 (0.074535) | 0.063977 / 0.037052 (0.026925) | 0.369759 / 0.258489 (0.111270) | 0.415103 / 0.293841 (0.121262) | 0.033069 / 0.128546 (-0.095477) | 0.008863 / 0.075646 (-0.066783) | 0.353660 / 0.419271 (-0.065611) | 0.055714 / 0.043533 (0.012181) | 0.350458 / 0.255139 (0.095319) | 0.369505 / 0.283200 (0.086305) | 0.022822 / 0.141683 (-0.118861) | 1.537588 / 1.452155 (0.085433) | 1.590569 / 1.492716 (0.097853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206826 / 0.018006 (0.188819) | 0.471625 / 0.000490 (0.471135) | 0.005188 / 0.000200 (0.004988) | 0.000316 / 0.000054 (0.000261) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028148 / 0.037411 (-0.009263) | 0.111941 / 0.014526 (0.097415) | 0.122106 / 0.176557 (-0.054451) | 0.181127 / 0.737135 (-0.556009) | 0.127534 / 0.296338 (-0.168805) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409520 / 0.215209 (0.194311) | 4.098455 / 2.077655 (2.020800) | 1.852447 / 1.504120 (0.348327) | 1.657036 / 1.541195 (0.115842) | 1.709624 / 1.468490 
(0.241134) | 0.542806 / 4.584777 (-4.041970) | 3.809352 / 3.745712 (0.063640) | 1.855412 / 5.269862 (-3.414449) | 1.109180 / 4.565676 (-3.456497) | 0.066801 / 0.424275 (-0.357474) | 0.011832 / 0.007607 (0.004225) | 0.518338 / 0.226044 (0.292293) | 5.190108 / 2.268929 (2.921179) | 2.320602 / 55.444624 (-53.124023) | 1.991416 / 6.876477 (-4.885060) | 2.106989 / 2.142072 (-0.035084) | 0.668914 / 4.805227 (-4.136313) | 0.145325 / 6.500664 (-6.355340) | 0.065145 / 0.075469 (-0.010324) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254706 / 1.841788 (-0.587082) | 14.707264 / 8.074308 (6.632956) | 14.615423 / 10.191392 (4.424031) | 0.170764 / 0.680424 (-0.509659) | 0.017905 / 0.534201 (-0.516296) | 0.435606 / 0.579283 (-0.143677) | 0.434648 / 0.434364 (0.000284) | 0.520813 / 0.540337 (-0.019524) | 0.633902 / 1.386936 (-0.753034) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007212 / 0.011353 (-0.004141) | 0.004301 / 0.011008 (-0.006707) | 0.080767 / 0.038508 (0.042258) | 0.051949 / 0.023109 (0.028840) | 0.398473 / 0.275898 (0.122575) | 0.465038 / 0.323480 (0.141558) | 0.005580 / 0.007986 (-0.002406) | 0.003556 / 0.004328 (-0.000773) | 0.080682 / 0.004250 (0.076431) | 0.059517 / 0.037052 (0.022464) | 0.421171 / 0.258489 (0.162682) | 0.459752 / 0.293841 (0.165911) | 0.032960 / 0.128546 (-0.095586) | 0.009107 / 0.075646 (-0.066539) | 0.086382 / 0.419271 (-0.332889) | 0.056053 / 0.043533 (0.012520) | 0.393357 / 0.255139 (0.138218) | 0.412972 / 0.283200 (0.129772) | 0.031115 / 0.141683 (-0.110568) | 1.576961 / 1.452155 (0.124806) | 1.627249 / 1.492716 (0.134533) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227618 / 0.018006 (0.209612) | 0.444640 / 0.000490 (0.444150) | 0.004376 / 0.000200 (0.004176) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030826 / 0.037411 (-0.006586) | 0.117587 / 0.014526 (0.103062) | 0.127467 / 0.176557 (-0.049089) | 0.184440 / 0.737135 (-0.552695) | 0.133664 / 0.296338 (-0.162675) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443183 / 0.215209 (0.227974) | 4.408312 / 2.077655 (2.330658) | 2.132487 / 1.504120 (0.628367) | 1.923632 / 1.541195 (0.382438) | 1.967882 / 1.468490 (0.499392) | 0.552954 / 4.584777 (-4.031823) | 3.777701 / 3.745712 (0.031989) | 1.857686 / 5.269862 (-3.412176) | 1.104847 / 4.565676 (-3.460829) | 0.068350 / 0.424275 (-0.355925) | 0.012437 / 0.007607 (0.004830) | 0.559258 / 0.226044 (0.333214) | 5.593258 / 2.268929 (3.324330) | 2.648059 / 55.444624 (-52.796565) | 2.277428 / 6.876477 (-4.599049) | 2.351685 / 2.142072 (0.209612) | 0.678750 / 4.805227 (-4.126477) | 0.145550 / 6.500664 (-6.355114) | 0.066556 / 0.075469 (-0.008913) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327128 / 1.841788 (-0.514659) | 15.649079 / 8.074308 (7.574771) | 14.478659 / 10.191392 (4.287267) | 0.147633 / 0.680424 (-0.532791) | 0.018502 / 0.534201 (-0.515699) | 0.438556 / 0.579283 (-0.140727) | 0.433381 / 0.434364 (-0.000983) | 0.514367 / 0.540337 (-0.025970) | 0.618347 / 1.386936 (-0.768589) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006078 / 0.011353 (-0.005275) | 0.003914 / 0.011008 (-0.007095) | 0.102039 / 0.038508 (0.063531) | 0.037660 / 0.023109 (0.014551) | 0.348963 / 0.275898 (0.073065) | 0.407284 / 0.323480 (0.083804) | 0.004661 / 0.007986 (-0.003324) | 0.003253 / 0.004328 (-0.001076) | 0.078276 / 0.004250 (0.074025) | 0.054144 / 0.037052 (0.017091) | 0.376715 / 0.258489 (0.118225) | 0.418499 / 0.293841 (0.124658) | 0.027627 / 0.128546 (-0.100919) | 0.008494 / 0.075646 (-0.067152) | 0.316894 / 0.419271 (-0.102377) | 0.046560 / 0.043533 (0.003027) | 0.339835 / 0.255139 (0.084696) | 0.374628 / 0.283200 (0.091428) | 0.020729 / 0.141683 (-0.120954) | 1.502769 / 1.452155 (0.050615) | 1.548756 / 1.492716 (0.056040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229192 / 0.018006 (0.211186) | 0.426245 / 0.000490 (0.425756) | 0.005190 / 0.000200 (0.004990) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024271 / 0.037411 (-0.013140) | 0.098869 / 0.014526 (0.084343) | 0.105079 / 0.176557 (-0.071477) | 0.164707 / 0.737135 (-0.572428) | 0.110337 / 0.296338 (-0.186002) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426593 / 0.215209 (0.211383) | 4.293977 / 2.077655 (2.216323) | 1.928502 / 1.504120 (0.424382) | 1.728623 / 1.541195 (0.187428) | 1.792084 / 1.468490 
(0.323594) | 0.568737 / 4.584777 (-4.016040) | 3.438534 / 3.745712 (-0.307178) | 1.797798 / 5.269862 (-3.472063) | 1.054078 / 4.565676 (-3.511598) | 0.068711 / 0.424275 (-0.355564) | 0.011250 / 0.007607 (0.003643) | 0.529299 / 0.226044 (0.303255) | 5.283965 / 2.268929 (3.015037) | 2.358274 / 55.444624 (-53.086350) | 2.012818 / 6.876477 (-4.863659) | 2.109923 / 2.142072 (-0.032149) | 0.679556 / 4.805227 (-4.125671) | 0.138346 / 6.500664 (-6.362318) | 0.066349 / 0.075469 (-0.009120) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193994 / 1.841788 (-0.647794) | 14.073158 / 8.074308 (5.998850) | 13.488525 / 10.191392 (3.297133) | 0.144536 / 0.680424 (-0.535888) | 0.016748 / 0.534201 (-0.517453) | 0.362703 / 0.579283 (-0.216580) | 0.389511 / 0.434364 (-0.044853) | 0.427296 / 0.540337 (-0.113041) | 0.513227 / 1.386936 (-0.873709) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006215 / 0.011353 (-0.005138) | 0.003834 / 0.011008 (-0.007174) | 0.078001 / 0.038508 (0.039493) | 0.036537 / 0.023109 (0.013428) | 0.369724 / 0.275898 (0.093826) | 0.426761 / 0.323480 (0.103281) | 0.003602 / 0.007986 (-0.004383) | 0.003001 / 0.004328 (-0.001327) | 0.075989 / 0.004250 (0.071739) | 0.048618 / 0.037052 (0.011566) | 0.374296 / 0.258489 (0.115807) | 0.430330 / 0.293841 (0.136489) | 0.028299 / 0.128546 (-0.100247) | 0.008537 / 0.075646 (-0.067109) | 0.083275 / 0.419271 (-0.335997) | 0.043136 / 0.043533 (-0.000397) | 0.359072 / 0.255139 (0.103933) | 0.387391 / 0.283200 (0.104192) | 0.021202 / 0.141683 (-0.120481) | 1.520832 / 1.452155 (0.068677) | 1.567030 / 1.492716 (0.074313) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230944 / 0.018006 (0.212938) | 0.422159 / 0.000490 (0.421669) | 0.003447 / 0.000200 (0.003247) | 0.000125 / 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025442 / 0.037411 (-0.011969) | 0.103944 / 0.014526 (0.089418) | 0.110577 / 0.176557 (-0.065979) | 0.161393 / 0.737135 (-0.575743) | 0.113482 / 0.296338 (-0.182857) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485765 / 0.215209 (0.270556) | 4.845737 / 2.077655 (2.768083) | 2.556732 / 1.504120 (1.052612) | 2.348638 / 1.541195 (0.807443) | 2.379289 / 1.468490 (0.910799) | 0.561261 / 4.584777 (-4.023516) | 3.482468 / 3.745712 (-0.263244) | 3.061319 / 5.269862 (-2.208543) | 1.483938 / 4.565676 (-3.081738) | 0.067584 / 0.424275 (-0.356691) | 0.011333 / 0.007607 (0.003726) | 0.594342 / 0.226044 (0.368297) | 5.935477 / 2.268929 (3.666548) | 3.025029 / 55.444624 (-52.419595) | 2.687032 / 6.876477 (-4.189445) | 2.752470 / 2.142072 (0.610398) | 0.674470 / 4.805227 (-4.130757) | 0.136777 / 6.500664 (-6.363887) | 0.068335 / 0.075469 (-0.007134) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336456 / 1.841788 (-0.505332) | 14.376007 / 8.074308 (6.301699) | 14.171375 / 10.191392 (3.979983) | 0.159620 / 0.680424 (-0.520804) | 0.016685 / 0.534201 (-0.517516) | 0.364344 / 0.579283 (-0.214939) | 0.395358 / 0.434364 (-0.039006) | 0.424876 / 0.540337 (-0.115461) | 0.513267 / 1.386936 (-0.873669) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-21T16:31:38 | 2023-06-26T10:34:43 | 2023-06-26T10:27:40 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5974",
"html_url": "https://github.com/huggingface/datasets/pull/5974",
"diff_url": "https://github.com/huggingface/datasets/pull/5974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5974.patch",
"merged_at": "2023-06-26T10:27:40"
}
|
For consistency with the JSON builder and Pandas
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5974/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5972
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5972/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5972/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5972/events
|
https://github.com/huggingface/datasets/pull/5972
| 1,767,897,485 |
PR_kwDODunzps5TkE7K
| 5,972 |
Filter unsupported extensions
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006983 / 0.011353 (-0.004369) | 0.004473 / 0.011008 (-0.006535) | 0.105158 / 0.038508 (0.066650) | 0.048973 / 0.023109 (0.025864) | 0.358771 / 0.275898 (0.082873) | 0.432389 / 0.323480 (0.108909) | 0.005689 / 0.007986 (-0.002297) | 0.003584 / 0.004328 (-0.000744) | 0.080852 / 0.004250 (0.076601) | 0.066133 / 0.037052 (0.029081) | 0.370981 / 0.258489 (0.112492) | 0.406942 / 0.293841 (0.113101) | 0.032123 / 0.128546 (-0.096424) | 0.009313 / 0.075646 (-0.066333) | 0.355220 / 0.419271 (-0.064051) | 0.055768 / 0.043533 (0.012235) | 0.370545 / 0.255139 (0.115406) | 0.375619 / 0.283200 (0.092419) | 0.024258 / 0.141683 (-0.117425) | 1.559073 / 1.452155 (0.106918) | 1.616520 / 1.492716 (0.123804) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.277893 / 0.018006 (0.259887) | 0.535447 / 0.000490 (0.534957) | 0.004877 / 0.000200 (0.004677) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029444 / 0.037411 (-0.007968) | 0.114366 / 0.014526 (0.099841) | 0.130957 / 0.176557 (-0.045599) | 0.189604 / 0.737135 (-0.547531) | 0.131682 / 0.296338 (-0.164656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412315 / 0.215209 (0.197106) | 4.093879 / 2.077655 (2.016225) | 1.856169 / 1.504120 (0.352050) | 1.655358 / 1.541195 (0.114164) | 1.758190 / 1.468490 
(0.289699) | 0.545829 / 4.584777 (-4.038948) | 3.871436 / 3.745712 (0.125724) | 1.938244 / 5.269862 (-3.331618) | 1.122727 / 4.565676 (-3.442950) | 0.067107 / 0.424275 (-0.357168) | 0.012012 / 0.007607 (0.004405) | 0.518868 / 0.226044 (0.292824) | 5.235081 / 2.268929 (2.966153) | 2.335115 / 55.444624 (-53.109509) | 2.013074 / 6.876477 (-4.863402) | 2.219808 / 2.142072 (0.077735) | 0.674602 / 4.805227 (-4.130626) | 0.147051 / 6.500664 (-6.353613) | 0.068444 / 0.075469 (-0.007025) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245600 / 1.841788 (-0.596188) | 15.537727 / 8.074308 (7.463419) | 15.074300 / 10.191392 (4.882908) | 0.194217 / 0.680424 (-0.486207) | 0.018536 / 0.534201 (-0.515665) | 0.437085 / 0.579283 (-0.142198) | 0.441123 / 0.434364 (0.006759) | 0.530681 / 0.540337 (-0.009657) | 0.649154 / 1.386936 (-0.737782) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007243 / 0.011353 (-0.004110) | 0.004688 / 0.011008 (-0.006320) | 0.079809 / 0.038508 (0.041301) | 0.046915 / 0.023109 (0.023805) | 0.415144 / 0.275898 (0.139246) | 0.474867 / 0.323480 (0.151388) | 0.004550 / 0.007986 (-0.003435) | 0.004585 / 0.004328 (0.000257) | 0.080837 / 0.004250 (0.076587) | 0.061667 / 0.037052 (0.024614) | 0.411321 / 0.258489 (0.152832) | 0.464195 / 0.293841 (0.170354) | 0.032510 / 0.128546 (-0.096037) | 0.009306 / 0.075646 (-0.066340) | 0.086637 / 0.419271 (-0.332635) | 0.053335 / 0.043533 (0.009802) | 0.402302 / 0.255139 (0.147163) | 0.424864 / 0.283200 (0.141664) | 0.026573 / 0.141683 (-0.115110) | 1.566793 / 1.452155 (0.114639) | 1.628118 / 1.492716 (0.135401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.317802 / 0.018006 (0.299796) | 0.544593 / 0.000490 (0.544103) | 0.005690 / 0.000200 (0.005490) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033015 / 0.037411 (-0.004397) | 0.121940 / 0.014526 (0.107414) | 0.132920 / 0.176557 (-0.043637) | 0.191481 / 0.737135 (-0.545655) | 0.139139 / 0.296338 (-0.157199) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460382 / 0.215209 (0.245173) | 4.610046 / 2.077655 (2.532392) | 2.296573 / 1.504120 (0.792453) | 2.099735 / 1.541195 (0.558540) | 2.213913 / 1.468490 (0.745423) | 0.544871 / 4.584777 (-4.039906) | 3.814174 / 3.745712 (0.068462) | 3.246397 / 5.269862 (-2.023464) | 1.480236 / 4.565676 (-3.085440) | 0.068464 / 0.424275 (-0.355811) | 0.012651 / 0.007607 (0.005043) | 0.564989 / 0.226044 (0.338944) | 5.639188 / 2.268929 (3.370259) | 2.827601 / 55.444624 (-52.617023) | 2.473743 / 6.876477 (-4.402734) | 2.567413 / 2.142072 (0.425340) | 0.674351 / 4.805227 (-4.130876) | 0.146248 / 6.500664 (-6.354416) | 0.067553 / 0.075469 (-0.007916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346703 / 1.841788 (-0.495085) | 16.494787 / 8.074308 (8.420479) | 15.179487 / 10.191392 (4.988095) | 0.181864 / 0.680424 (-0.498560) | 0.018857 / 0.534201 (-0.515344) | 0.437787 / 0.579283 (-0.141496) | 0.431770 / 0.434364 (-0.002594) | 0.507116 / 0.540337 (-0.033221) | 0.608899 / 1.386936 (-0.778037) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005963 / 0.011353 (-0.005390) | 0.003743 / 0.011008 (-0.007265) | 0.098519 / 0.038508 (0.060011) | 0.037392 / 0.023109 (0.014283) | 0.322706 / 0.275898 (0.046808) | 0.380032 / 0.323480 (0.056552) | 0.004694 / 0.007986 (-0.003292) | 0.002897 / 0.004328 (-0.001432) | 0.078664 / 0.004250 (0.074414) | 0.052646 / 0.037052 (0.015594) | 0.335523 / 0.258489 (0.077034) | 0.375464 / 0.293841 (0.081623) | 0.027537 / 0.128546 (-0.101010) | 0.008452 / 0.075646 (-0.067194) | 0.313844 / 0.419271 (-0.105427) | 0.047368 / 0.043533 (0.003835) | 0.313833 / 0.255139 (0.058694) | 0.342284 / 0.283200 (0.059085) | 0.021136 / 0.141683 (-0.120547) | 1.544764 / 1.452155 (0.092610) | 1.563850 / 1.492716 (0.071134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188609 / 0.018006 (0.170603) | 0.421686 / 0.000490 (0.421196) | 0.003336 / 0.000200 (0.003136) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023678 / 0.037411 (-0.013733) | 0.099191 / 0.014526 (0.084665) | 0.105819 / 0.176557 (-0.070738) | 0.169654 / 0.737135 (-0.567481) | 0.110240 / 0.296338 (-0.186099) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425497 / 0.215209 (0.210288) | 4.237165 / 2.077655 (2.159510) | 1.902953 / 1.504120 (0.398833) | 1.699012 / 1.541195 (0.157818) | 1.751107 / 1.468490 
(0.282617) | 0.563326 / 4.584777 (-4.021451) | 3.394189 / 3.745712 (-0.351523) | 2.706129 / 5.269862 (-2.563732) | 1.361522 / 4.565676 (-3.204155) | 0.067776 / 0.424275 (-0.356499) | 0.010959 / 0.007607 (0.003352) | 0.530905 / 0.226044 (0.304860) | 5.322467 / 2.268929 (3.053538) | 2.384356 / 55.444624 (-53.060269) | 2.044196 / 6.876477 (-4.832281) | 2.119837 / 2.142072 (-0.022235) | 0.682236 / 4.805227 (-4.122991) | 0.136921 / 6.500664 (-6.363743) | 0.066784 / 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210642 / 1.841788 (-0.631146) | 13.804572 / 8.074308 (5.730264) | 13.309229 / 10.191392 (3.117837) | 0.154356 / 0.680424 (-0.526068) | 0.016833 / 0.534201 (-0.517368) | 0.366503 / 0.579283 (-0.212780) | 0.385201 / 0.434364 (-0.049163) | 0.426713 / 0.540337 (-0.113624) | 0.516795 / 1.386936 (-0.870141) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006144 / 0.011353 (-0.005209) | 0.003723 / 0.011008 (-0.007285) | 0.077427 / 0.038508 (0.038919) | 0.037636 / 0.023109 (0.014527) | 0.375048 / 0.275898 (0.099150) | 0.442254 / 0.323480 (0.118774) | 0.003506 / 0.007986 (-0.004480) | 0.003751 / 0.004328 (-0.000577) | 0.076771 / 0.004250 (0.072521) | 0.047915 / 0.037052 (0.010862) | 0.378918 / 0.258489 (0.120429) | 0.435300 / 0.293841 (0.141459) | 0.028317 / 0.128546 (-0.100230) | 0.008413 / 0.075646 (-0.067233) | 0.082774 / 0.419271 (-0.336497) | 0.043211 / 0.043533 (-0.000321) | 0.362022 / 0.255139 (0.106883) | 0.404928 / 0.283200 (0.121728) | 0.020692 / 0.141683 (-0.120991) | 1.527303 / 1.452155 (0.075148) | 1.596091 / 1.492716 (0.103375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225537 / 0.018006 (0.207530) | 0.399901 / 0.000490 (0.399412) | 0.000424 / 0.000200 (0.000224) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026483 / 0.037411 (-0.010928) | 0.104373 / 0.014526 (0.089847) | 0.111271 / 0.176557 (-0.065286) | 0.163872 / 0.737135 (-0.573264) | 0.113991 / 0.296338 (-0.182347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456484 / 0.215209 (0.241275) | 4.572652 / 2.077655 (2.494998) | 2.374908 / 1.504120 (0.870788) | 2.207855 / 1.541195 (0.666661) | 2.260009 / 1.468490 (0.791519) | 0.562678 / 4.584777 (-4.022099) | 3.441778 / 3.745712 (-0.303934) | 1.729006 / 5.269862 (-3.540855) | 1.024937 / 4.565676 (-3.540739) | 0.068707 / 0.424275 (-0.355568) | 0.011334 / 0.007607 (0.003727) | 0.564293 / 0.226044 (0.338248) | 5.638367 / 2.268929 (3.369438) | 2.665654 / 55.444624 (-52.778970) | 2.320033 / 6.876477 (-4.556444) | 2.328706 / 2.142072 (0.186634) | 0.677433 / 4.805227 (-4.127794) | 0.137190 / 6.500664 (-6.363474) | 0.068585 / 0.075469 (-0.006885) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312476 / 1.841788 (-0.529312) | 14.206685 / 8.074308 (6.132377) | 14.217928 / 10.191392 (4.026536) | 0.143416 / 0.680424 (-0.537007) | 0.016647 / 0.534201 (-0.517554) | 0.361228 / 0.579283 (-0.218055) | 0.396185 / 0.434364 (-0.038178) | 0.423275 / 0.540337 (-0.117063) | 0.512966 / 1.386936 (-0.873970) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008913 / 0.011353 (-0.002440) | 0.005142 / 0.011008 (-0.005866) | 0.133958 / 0.038508 (0.095449) | 0.049180 / 0.023109 (0.026071) | 0.389169 / 0.275898 (0.113270) | 0.481513 / 0.323480 (0.158033) | 0.006555 / 0.007986 (-0.001430) | 0.003806 / 0.004328 (-0.000522) | 0.102056 / 0.004250 (0.097806) | 0.083259 / 0.037052 (0.046207) | 0.392536 / 0.258489 (0.134047) | 0.447503 / 0.293841 (0.153662) | 0.047472 / 0.128546 (-0.081074) | 0.014748 / 0.075646 (-0.060899) | 0.475619 / 0.419271 (0.056348) | 0.107306 / 0.043533 (0.063773) | 0.421942 / 0.255139 (0.166803) | 0.419736 / 0.283200 (0.136536) | 0.044195 / 0.141683 (-0.097488) | 1.793840 / 1.452155 (0.341686) | 1.960204 / 1.492716 (0.467488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252046 / 0.018006 (0.234040) | 0.627725 / 0.000490 (0.627236) | 0.007435 / 0.000200 (0.007235) | 0.000526 / 0.000054 (0.000472) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034656 / 0.037411 (-0.002755) | 0.114534 / 0.014526 (0.100008) | 0.135804 / 0.176557 (-0.040753) | 0.209309 / 0.737135 (-0.527826) | 0.140369 / 0.296338 (-0.155969) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.636736 / 0.215209 (0.421527) | 6.039985 / 2.077655 (3.962330) | 2.640141 / 1.504120 (1.136021) | 2.284492 / 1.541195 (0.743297) | 2.324956 / 1.468490 
(0.856466) | 0.934499 / 4.584777 (-3.650278) | 5.673415 / 3.745712 (1.927703) | 5.184584 / 5.269862 (-0.085278) | 2.661911 / 4.565676 (-1.903766) | 0.150420 / 0.424275 (-0.273855) | 0.015655 / 0.007607 (0.008048) | 0.748290 / 0.226044 (0.522246) | 7.579755 / 2.268929 (5.310827) | 3.346732 / 55.444624 (-52.097892) | 2.708212 / 6.876477 (-4.168264) | 2.682423 / 2.142072 (0.540351) | 1.170389 / 4.805227 (-3.634838) | 0.215775 / 6.500664 (-6.284889) | 0.076360 / 0.075469 (0.000891) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516794 / 1.841788 (-0.324993) | 18.709117 / 8.074308 (10.634809) | 22.492542 / 10.191392 (12.301150) | 0.237978 / 0.680424 (-0.442446) | 0.027828 / 0.534201 (-0.506373) | 0.499968 / 0.579283 (-0.079315) | 0.645899 / 0.434364 (0.211535) | 0.548599 / 0.540337 (0.008262) | 0.675428 / 1.386936 (-0.711508) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008469 / 0.011353 (-0.002884) | 0.005420 / 0.011008 (-0.005589) | 0.093340 / 0.038508 (0.054832) | 0.045896 / 0.023109 (0.022786) | 0.533267 / 0.275898 (0.257369) | 0.596034 / 0.323480 (0.272555) | 0.004816 / 0.007986 (-0.003170) | 0.004379 / 0.004328 (0.000051) | 0.096356 / 0.004250 (0.092106) | 0.058339 / 0.037052 (0.021287) | 0.574464 / 0.258489 (0.315975) | 0.649301 / 0.293841 (0.355461) | 0.047599 / 0.128546 (-0.080947) | 0.013759 / 0.075646 (-0.061887) | 0.104672 / 0.419271 (-0.314599) | 0.061658 / 0.043533 (0.018125) | 0.560956 / 0.255139 (0.305817) | 0.585328 / 0.283200 (0.302128) | 0.034137 / 0.141683 (-0.107546) | 1.844528 / 1.452155 (0.392373) | 1.971398 / 1.492716 (0.478682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278666 / 0.018006 (0.260660) | 0.577342 / 0.000490 (0.576853) | 0.005496 / 0.000200 (0.005296) | 0.000131 / 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029863 / 0.037411 (-0.007549) | 0.161703 / 0.014526 (0.147177) | 0.132279 / 0.176557 (-0.044277) | 0.227345 / 0.737135 (-0.509791) | 0.138047 / 0.296338 (-0.158291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.651535 / 0.215209 (0.436326) | 7.077949 / 2.077655 (5.000295) | 2.926990 / 1.504120 (1.422871) | 2.598872 / 1.541195 (1.057678) | 2.614192 / 1.468490 (1.145702) | 0.913845 / 4.584777 (-3.670932) | 5.704301 / 3.745712 (1.958589) | 2.796914 / 5.269862 (-2.472948) | 1.836096 / 4.565676 (-2.729580) | 0.106294 / 0.424275 (-0.317981) | 0.012705 / 0.007607 (0.005098) | 0.836336 / 0.226044 (0.610291) | 8.234079 / 2.268929 (5.965150) | 3.836410 / 55.444624 (-51.608215) | 3.116752 / 6.876477 (-3.759724) | 3.154258 / 2.142072 (1.012186) | 1.195794 / 4.805227 (-3.609434) | 0.240491 / 6.500664 (-6.260173) | 0.087913 / 0.075469 (0.012444) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.724723 / 1.841788 (-0.117064) | 19.492194 / 8.074308 (11.417885) | 21.443341 / 10.191392 (11.251949) | 0.245819 / 0.680424 (-0.434605) | 0.027024 / 0.534201 (-0.507177) | 0.481071 / 0.579283 (-0.098212) | 0.596359 / 0.434364 (0.161995) | 0.646462 / 0.540337 (0.106124) | 0.706380 / 1.386936 (-0.680556) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006634 / 0.011353 (-0.004719) | 0.004003 / 0.011008 (-0.007005) | 0.097874 / 0.038508 (0.059365) | 0.043528 / 0.023109 (0.020419) | 0.302293 / 0.275898 (0.026395) | 0.357041 / 0.323480 (0.033561) | 0.003761 / 0.007986 (-0.004225) | 0.004312 / 0.004328 (-0.000016) | 0.076253 / 0.004250 (0.072003) | 0.062807 / 0.037052 (0.025755) | 0.316737 / 0.258489 (0.058248) | 0.356722 / 0.293841 (0.062881) | 0.030816 / 0.128546 (-0.097730) | 0.008691 / 0.075646 (-0.066955) | 0.328366 / 0.419271 (-0.090906) | 0.062299 / 0.043533 (0.018766) | 0.293877 / 0.255139 (0.038738) | 0.319832 / 0.283200 (0.036632) | 0.024996 / 0.141683 (-0.116687) | 1.473912 / 1.452155 (0.021758) | 1.565439 / 1.492716 (0.072723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208428 / 0.018006 (0.190422) | 0.435618 / 0.000490 (0.435128) | 0.000695 / 0.000200 (0.000495) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026253 / 0.037411 (-0.011158) | 0.106908 / 0.014526 (0.092382) | 0.117075 / 0.176557 (-0.059482) | 0.177969 / 0.737135 (-0.559166) | 0.123400 / 0.296338 (-0.172938) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424970 / 0.215209 (0.209761) | 4.203233 / 2.077655 (2.125578) | 2.009679 / 1.504120 (0.505559) | 1.825691 / 1.541195 (0.284496) | 1.870639 / 1.468490 
(0.402149) | 0.530758 / 4.584777 (-4.054019) | 3.718791 / 3.745712 (-0.026921) | 1.800206 / 5.269862 (-3.469656) | 1.071651 / 4.565676 (-3.494025) | 0.065126 / 0.424275 (-0.359149) | 0.011312 / 0.007607 (0.003704) | 0.532503 / 0.226044 (0.306458) | 5.353950 / 2.268929 (3.085021) | 2.463548 / 55.444624 (-52.981076) | 2.139832 / 6.876477 (-4.736645) | 2.238722 / 2.142072 (0.096650) | 0.655736 / 4.805227 (-4.149492) | 0.141689 / 6.500664 (-6.358975) | 0.063282 / 0.075469 (-0.012187) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.183523 / 1.841788 (-0.658265) | 14.146428 / 8.074308 (6.072120) | 14.312883 / 10.191392 (4.121491) | 0.169286 / 0.680424 (-0.511138) | 0.017343 / 0.534201 (-0.516858) | 0.397934 / 0.579283 (-0.181349) | 0.417791 / 0.434364 (-0.016573) | 0.463639 / 0.540337 (-0.076698) | 0.562787 / 1.386936 (-0.824149) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006594 / 0.011353 (-0.004759) | 0.004086 / 0.011008 (-0.006922) | 0.075122 / 0.038508 (0.036614) | 0.041849 / 0.023109 (0.018740) | 0.362645 / 0.275898 (0.086747) | 0.464350 / 0.323480 (0.140870) | 0.003760 / 0.007986 (-0.004226) | 0.003327 / 0.004328 (-0.001001) | 0.076154 / 0.004250 (0.071904) | 0.053232 / 0.037052 (0.016180) | 0.407863 / 0.258489 (0.149374) | 0.460787 / 0.293841 (0.166946) | 0.031917 / 0.128546 (-0.096630) | 0.008770 / 0.075646 (-0.066876) | 0.082612 / 0.419271 (-0.336660) | 0.051311 / 0.043533 (0.007779) | 0.354508 / 0.255139 (0.099369) | 0.419533 / 0.283200 (0.136334) | 0.023980 / 0.141683 (-0.117703) | 1.491255 / 1.452155 (0.039100) | 1.536101 / 1.492716 (0.043384) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178261 / 0.018006 (0.160255) | 0.444680 / 0.000490 (0.444190) | 0.013761 / 0.000200 (0.013561) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027875 / 0.037411 (-0.009536) | 0.111269 / 0.014526 (0.096744) | 0.121096 / 0.176557 (-0.055461) | 0.174387 / 0.737135 (-0.562749) | 0.124714 / 0.296338 (-0.171624) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445422 / 0.215209 (0.230213) | 4.435877 / 2.077655 (2.358222) | 2.221895 / 1.504120 (0.717775) | 2.030571 / 1.541195 (0.489376) | 2.074863 / 1.468490 (0.606373) | 0.543331 / 4.584777 (-4.041446) | 3.753615 / 3.745712 (0.007903) | 3.317074 / 5.269862 (-1.952787) | 1.630390 / 4.565676 (-2.935286) | 0.066726 / 0.424275 (-0.357549) | 0.011556 / 0.007607 (0.003949) | 0.546985 / 0.226044 (0.320941) | 5.460634 / 2.268929 (3.191705) | 2.705945 / 55.444624 (-52.738679) | 2.373425 / 6.876477 (-4.503052) | 2.401472 / 2.142072 (0.259399) | 0.663225 / 4.805227 (-4.142002) | 0.143694 / 6.500664 (-6.356970) | 0.065283 / 0.075469 (-0.010186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264804 / 1.841788 (-0.576983) | 14.803228 / 8.074308 (6.728919) | 14.178514 / 10.191392 (3.987122) | 0.162651 / 0.680424 (-0.517772) | 0.017586 / 0.534201 (-0.516615) | 0.398740 / 0.579283 (-0.180543) | 0.414478 / 0.434364 (-0.019886) | 0.465442 / 0.540337 (-0.074895) | 0.563450 / 1.386936 (-0.823486) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-21T15:43:01 | 2023-06-22T14:23:29 | 2023-06-22T14:16:26 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5972",
"html_url": "https://github.com/huggingface/datasets/pull/5972",
"diff_url": "https://github.com/huggingface/datasets/pull/5972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5972.patch",
"merged_at": "2023-06-22T14:16:26"
}
|
I used a regex to filter the data files based on their extension for packaged builders.
In my tests, a regex is 10x faster than using `in` to check whether the extension is in the list of supported extensions.
Supersedes https://github.com/huggingface/datasets/pull/5850
Close https://github.com/huggingface/datasets/issues/5849
I also made a small change to favor the Parquet module in the event of a tie in the extension counter.
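As an illustration only (not the exact code from this PR), here is a minimal sketch of the idea, assuming a hypothetical `SUPPORTED_EXTENSIONS` list; the real list and matching logic live in the packaged builders, and exact timings will vary:
```python
import re
import timeit

# Hypothetical list of supported extensions; the real list lives in the library.
SUPPORTED_EXTENSIONS = [".csv", ".json", ".jsonl", ".parquet", ".txt"]

# One compiled pattern matching any supported extension at the end of a path.
_EXTENSION_RE = re.compile(
    "(?:" + "|".join(re.escape(ext) for ext in SUPPORTED_EXTENSIONS) + ")$",
    flags=re.IGNORECASE,
)

def is_supported_regex(path: str) -> bool:
    # Single regex search against the end of the path.
    return _EXTENSION_RE.search(path) is not None

def is_supported_in(path: str) -> bool:
    # Split off the extension and test list membership with `in`.
    return "." + path.rsplit(".", 1)[-1].lower() in SUPPORTED_EXTENSIONS

files = ["data/train-00000-of-00001.parquet", "README.md", "notes.txt"] * 1_000

print("regex:", timeit.timeit(lambda: [is_supported_regex(f) for f in files], number=100))
print("in   :", timeit.timeit(lambda: [is_supported_in(f) for f in files], number=100))
```
Compiling a single alternation pattern once means each path is checked with one regex search instead of repeated membership tests against the extension list.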
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5972/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5971
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5971/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5971/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5971/events
|
https://github.com/huggingface/datasets/issues/5971
| 1,767,053,635 |
I_kwDODunzps5pUxlD
| 5,971 |
Docs: make "repository structure" easier to find
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
open
| false |
{
"login": "benjaminbrown038",
"id": 35114142,
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminbrown038",
"html_url": "https://github.com/benjaminbrown038",
"followers_url": "https://api.github.com/users/benjaminbrown038/followers",
"following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions",
"organizations_url": "https://api.github.com/users/benjaminbrown038/orgs",
"repos_url": "https://api.github.com/users/benjaminbrown038/repos",
"events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjaminbrown038/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "benjaminbrown038",
"id": 35114142,
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminbrown038",
"html_url": "https://github.com/benjaminbrown038",
"followers_url": "https://api.github.com/users/benjaminbrown038/followers",
"following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions",
"organizations_url": "https://api.github.com/users/benjaminbrown038/orgs",
"repos_url": "https://api.github.com/users/benjaminbrown038/repos",
"events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjaminbrown038/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Loading a local dataset also works the same way when `data_files` are not specified, so I agree we should make this info easier to discover \r\n\r\ncc @stevhliu ",
"Is this issue open? If so, I will self assign. ",
"@benjaminbrown038 Yes, it is. Maybe @stevhliu can give some pointers on improving this doc page's discoverability.",
"I think we can add a version of the [Main use-case](https://huggingface.co/docs/datasets/repository_structure#main-usecase) section to the [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset) tutorial. \r\n\r\nCurrently, it doesn't tell you *how* to structure the repository; it only tells you how to create it. So adding the \"main use-case\" will help bridge the gap and make it easier to find. We should also add a link to the [Structure your repository](https://huggingface.co/docs/datasets/repository_structure) guide for users who want to learn about the other options.",
"#self-assign"
] | 2023-06-21T08:26:44 | 2023-07-05T06:51:38 | null |
CONTRIBUTOR
| null | null | null |
The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.
It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5971/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5971/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5970
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5970/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5970/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5970/events
|
https://github.com/huggingface/datasets/issues/5970
| 1,766,010,356 |
I_kwDODunzps5pQy30
| 5,970 |
Description disappearing from `DatasetInfo` when uploading a dataset created with `from_dict`
|
{
"login": "balisujohn",
"id": 20377292,
"node_id": "MDQ6VXNlcjIwMzc3Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/20377292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balisujohn",
"html_url": "https://github.com/balisujohn",
"followers_url": "https://api.github.com/users/balisujohn/followers",
"following_url": "https://api.github.com/users/balisujohn/following{/other_user}",
"gists_url": "https://api.github.com/users/balisujohn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balisujohn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balisujohn/subscriptions",
"organizations_url": "https://api.github.com/users/balisujohn/orgs",
"repos_url": "https://api.github.com/users/balisujohn/repos",
"events_url": "https://api.github.com/users/balisujohn/events{/privacy}",
"received_events_url": "https://api.github.com/users/balisujohn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"Here's a minimal way to reproduce the bug, for the sake of convenience.\r\n````\r\nfrom datasets import Dataset, DatasetInfo, load_dataset\r\n\r\n\r\nepisodes_dict = {\"test\":[1,2,3],\"test2\": [1,2,4]}\r\n\r\nhugging_face_dataset = Dataset.from_dict(\r\n episodes_dict, info=DatasetInfo(description=\"test_str\")\r\n)\r\nprint(hugging_face_dataset.info)\r\n\r\nhugging_face_dataset.push_to_hub(\"balisujohn/minari_test\", private=True)\r\n\r\nredownloaded_dataset= load_dataset(\"balisujohn/minari_test\")[\"train\"]\r\n\r\n\r\nprint(redownloaded_dataset.info)\r\n````\r\n",
"Thanks for reporting !\r\n\r\nFor now I would recommend uploading a separate JSON file for your metadata.\r\n\r\nAlternatively you can upload a second configuration of the dataset containing your metadata but this feature is not released yet (though you can already use it from [here](https://github.com/huggingface/datasets/pull/5331), it will be released soon)"
] | 2023-06-20T19:18:26 | 2023-06-22T14:23:56 | null |
NONE
| null | null | null |
### Describe the bug
When uploading a dataset created locally using `from_dict` with a specified `description` field, the description appears before upload but is missing after upload and re-download.
### Steps to reproduce the bug
I think the most relevant pattern in the code might be the following lines:
```
description_json_str = json.dumps(
{
"dataset_id": dataset.spec.dataset_id,
"env_name": dataset.spec.env_spec.id,
"action_space": serialize_space(dataset.spec.action_space),
"observation_space": serialize_space(dataset.spec.observation_space),
}
)
hugging_face_dataset = Dataset.from_dict(
episodes_dict, info=DatasetInfo(description=description_json_str)
)
```
Which comes from this function https://github.com/balisujohn/minarai/blob/8e023727f0a8488c4451651d9f7a79b981412c40/minari/integrations/hugging_face.py#L39
To replicate, clone this branch of my Minari fork https://github.com/balisujohn/minarai/tree/dev-huggingface, then run:
```
python3.8 -m venv env
source env/bin/activate
python3 -m pip install -e .
python3 -m pip install pytest
```
Then change the Hugging Face repo path in the test called `test_hugging_face_push_and_pull_dataset` in `tests/integrations/test_hugging_face.py` to one you have permissions to write to.
Then run:
```
pytest tests/integrations/test_hugging_face.py::test_hugging_face_push_and_pull_dataset
```
### Expected behavior
DATASET INFO BEFORE UPLOADING
DatasetInfo(description='{"dataset_id": "dummy-combo-test-v0", "env_name": "DummyComboEnv-v0", "action_space": "{\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [4.0], \\"high\\": [5.0]}]}", "observation_space": "{\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Dict\\", \\"subspaces\\": {\\"component_1\\": {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [-1.0], \\"high\\": [1.0]}, \\"component_2\\": {\\"type\\": \\"Dict\\", \\"subspaces\\": {\\"subcomponent_1\\": {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, \\"subcomponent_2\\": {\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [4.0], \\"high\\": [5.0]}, {\\"type\\": \\"Discrete\\", \\"dtype\\": \\"int64\\", \\"start\\": 0, \\"n\\": 10}]}}}}}]}]}"}', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None)
...
DATASET INFO AFTER UPLOADING AND DOWNLOADING
DatasetInfo(description='', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits={'train': SplitInfo(name='train', num_bytes=4846, num_examples=60, shard_lengths=None, dataset_name='parquet')}, download_checksums={'https://huggingface.co/datasets/balisujohn/minari_test/resolve/8217b614ff9ba5edc1a30c7df430e92a46f65363/data/train-00000-of-00001-7c5900b93b35745e.parquet': {'num_bytes': 9052, 'checksum': None}}, download_size=9052, post_processing_size=None, dataset_size=4846, size_in_bytes=13898)
...
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5970/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5969
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5969/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5969/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5969/events
|
https://github.com/huggingface/datasets/pull/5969
| 1,765,529,905 |
PR_kwDODunzps5Tcgq4
| 5,969 |
Add `encoding` and `errors` params to JSON loader
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006770 / 0.011353 (-0.004583) | 0.004143 / 0.011008 (-0.006865) | 0.098928 / 0.038508 (0.060420) | 0.044893 / 0.023109 (0.021783) | 0.302630 / 0.275898 (0.026732) | 0.368173 / 0.323480 (0.044693) | 0.005631 / 0.007986 (-0.002354) | 0.003397 / 0.004328 (-0.000931) | 0.075748 / 0.004250 (0.071497) | 0.062582 / 0.037052 (0.025530) | 0.329586 / 0.258489 (0.071097) | 0.362625 / 0.293841 (0.068784) | 0.033250 / 0.128546 (-0.095296) | 0.008880 / 0.075646 (-0.066766) | 0.329683 / 0.419271 (-0.089588) | 0.054426 / 0.043533 (0.010893) | 0.297940 / 0.255139 (0.042801) | 0.319796 / 0.283200 (0.036597) | 0.023296 / 0.141683 (-0.118387) | 1.462142 / 1.452155 (0.009987) | 1.495796 / 1.492716 (0.003079) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201771 / 0.018006 (0.183765) | 0.454514 / 0.000490 (0.454024) | 0.003333 / 0.000200 (0.003133) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028084 / 0.037411 (-0.009327) | 0.109452 / 0.014526 (0.094926) | 0.119200 / 0.176557 (-0.057357) | 0.180302 / 0.737135 (-0.556834) | 0.125653 / 0.296338 (-0.170686) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409819 / 0.215209 (0.194610) | 4.055117 / 2.077655 (1.977462) | 1.855279 / 1.504120 (0.351159) | 1.655281 / 1.541195 (0.114086) | 1.687938 / 1.468490 
(0.219448) | 0.528352 / 4.584777 (-4.056425) | 3.750250 / 3.745712 (0.004538) | 3.386741 / 5.269862 (-1.883121) | 1.572036 / 4.565676 (-2.993640) | 0.065125 / 0.424275 (-0.359150) | 0.011259 / 0.007607 (0.003652) | 0.513449 / 0.226044 (0.287405) | 5.139421 / 2.268929 (2.870492) | 2.316973 / 55.444624 (-53.127651) | 1.984109 / 6.876477 (-4.892368) | 2.127915 / 2.142072 (-0.014158) | 0.653238 / 4.805227 (-4.151989) | 0.142686 / 6.500664 (-6.357978) | 0.063666 / 0.075469 (-0.011803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.185174 / 1.841788 (-0.656614) | 14.790282 / 8.074308 (6.715974) | 13.089222 / 10.191392 (2.897830) | 0.146055 / 0.680424 (-0.534369) | 0.017835 / 0.534201 (-0.516366) | 0.399598 / 0.579283 (-0.179685) | 0.425296 / 0.434364 (-0.009068) | 0.478552 / 0.540337 (-0.061786) | 0.579702 / 1.386936 (-0.807234) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006750 / 0.011353 (-0.004603) | 0.004156 / 0.011008 (-0.006853) | 0.074948 / 0.038508 (0.036440) | 0.043368 / 0.023109 (0.020259) | 0.355389 / 0.275898 (0.079491) | 0.429167 / 0.323480 (0.105687) | 0.003911 / 0.007986 (-0.004075) | 0.004340 / 0.004328 (0.000012) | 0.075940 / 0.004250 (0.071689) | 0.054293 / 0.037052 (0.017241) | 0.400317 / 0.258489 (0.141827) | 0.432001 / 0.293841 (0.138160) | 0.032340 / 0.128546 (-0.096206) | 0.008876 / 0.075646 (-0.066770) | 0.082284 / 0.419271 (-0.336987) | 0.050819 / 0.043533 (0.007286) | 0.351994 / 0.255139 (0.096855) | 0.375917 / 0.283200 (0.092717) | 0.022466 / 0.141683 (-0.119217) | 1.538824 / 1.452155 (0.086669) | 1.563995 / 1.492716 (0.071279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227330 / 0.018006 (0.209323) | 0.446380 / 0.000490 (0.445890) | 0.000408 / 0.000200 (0.000208) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028534 / 0.037411 (-0.008878) | 0.113467 / 0.014526 (0.098941) | 0.123590 / 0.176557 (-0.052966) | 0.174309 / 0.737135 (-0.562827) | 0.130631 / 0.296338 (-0.165707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441020 / 0.215209 (0.225811) | 4.386564 / 2.077655 (2.308909) | 2.100704 / 1.504120 (0.596584) | 1.901484 / 1.541195 (0.360289) | 1.963494 / 1.468490 (0.495004) | 0.536838 / 4.584777 (-4.047939) | 3.739071 / 3.745712 (-0.006642) | 3.278981 / 5.269862 (-1.990881) | 1.515476 / 4.565676 (-3.050201) | 0.066388 / 0.424275 (-0.357887) | 0.011857 / 0.007607 (0.004250) | 0.545507 / 0.226044 (0.319463) | 5.441479 / 2.268929 (3.172550) | 2.602144 / 55.444624 (-52.842480) | 2.235583 / 6.876477 (-4.640894) | 2.293458 / 2.142072 (0.151385) | 0.658535 / 4.805227 (-4.146692) | 0.141327 / 6.500664 (-6.359337) | 0.063726 / 0.075469 (-0.011743) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247819 / 1.841788 (-0.593968) | 15.234524 / 8.074308 (7.160216) | 14.592700 / 10.191392 (4.401308) | 0.141952 / 0.680424 (-0.538472) | 0.017747 / 0.534201 (-0.516454) | 0.396819 / 0.579283 (-0.182465) | 0.415902 / 0.434364 (-0.018462) | 0.464619 / 0.540337 (-0.075718) | 0.560866 / 1.386936 (-0.826070) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008278 / 0.011353 (-0.003075) | 0.005044 / 0.011008 (-0.005964) | 0.123382 / 0.038508 (0.084874) | 0.054039 / 0.023109 (0.030929) | 0.382338 / 0.275898 (0.106440) | 0.453287 / 0.323480 (0.129807) | 0.006342 / 0.007986 (-0.001644) | 0.003930 / 0.004328 (-0.000398) | 0.094039 / 0.004250 (0.089789) | 0.076525 / 0.037052 (0.039472) | 0.394066 / 0.258489 (0.135577) | 0.445600 / 0.293841 (0.151759) | 0.039348 / 0.128546 (-0.089199) | 0.010485 / 0.075646 (-0.065161) | 0.433730 / 0.419271 (0.014459) | 0.082671 / 0.043533 (0.039138) | 0.375250 / 0.255139 (0.120111) | 0.416269 / 0.283200 (0.133070) | 0.038397 / 0.141683 (-0.103286) | 1.864834 / 1.452155 (0.412680) | 2.010453 / 1.492716 (0.517737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240008 / 0.018006 (0.222002) | 0.470975 / 0.000490 (0.470485) | 0.004001 / 0.000200 (0.003801) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031107 / 0.037411 (-0.006304) | 0.129371 / 0.014526 (0.114846) | 0.141559 / 0.176557 (-0.034997) | 0.205571 / 0.737135 (-0.531564) | 0.144611 / 0.296338 (-0.151728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506972 / 0.215209 (0.291763) | 5.055951 / 2.077655 (2.978296) | 2.397438 / 1.504120 (0.893318) | 2.170435 / 1.541195 (0.629240) | 2.240296 / 1.468490 
(0.771806) | 0.641559 / 4.584777 (-3.943218) | 4.644772 / 3.745712 (0.899060) | 4.064200 / 5.269862 (-1.205662) | 1.946991 / 4.565676 (-2.618685) | 0.086413 / 0.424275 (-0.337862) | 0.015082 / 0.007607 (0.007475) | 0.670413 / 0.226044 (0.444369) | 6.331346 / 2.268929 (4.062418) | 2.965813 / 55.444624 (-52.478812) | 2.547952 / 6.876477 (-4.328524) | 2.718390 / 2.142072 (0.576318) | 0.796657 / 4.805227 (-4.008571) | 0.173229 / 6.500664 (-6.327435) | 0.079606 / 0.075469 (0.004137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568761 / 1.841788 (-0.273026) | 18.485432 / 8.074308 (10.411124) | 15.758513 / 10.191392 (5.567121) | 0.170427 / 0.680424 (-0.509997) | 0.021421 / 0.534201 (-0.512780) | 0.518623 / 0.579283 (-0.060660) | 0.525887 / 0.434364 (0.091523) | 0.640331 / 0.540337 (0.099993) | 0.766748 / 1.386936 (-0.620188) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007680 / 0.011353 (-0.003673) | 0.005289 / 0.011008 (-0.005719) | 0.093773 / 0.038508 (0.055265) | 0.054997 / 0.023109 (0.031888) | 0.456277 / 0.275898 (0.180379) | 0.500642 / 0.323480 (0.177162) | 0.005935 / 0.007986 (-0.002050) | 0.004375 / 0.004328 (0.000047) | 0.094131 / 0.004250 (0.089881) | 0.063399 / 0.037052 (0.026347) | 0.470546 / 0.258489 (0.212057) | 0.504989 / 0.293841 (0.211148) | 0.038541 / 0.128546 (-0.090006) | 0.010403 / 0.075646 (-0.065244) | 0.102469 / 0.419271 (-0.316802) | 0.063105 / 0.043533 (0.019572) | 0.466005 / 0.255139 (0.210866) | 0.458677 / 0.283200 (0.175477) | 0.028407 / 0.141683 (-0.113276) | 1.893829 / 1.452155 (0.441675) | 1.917954 / 1.492716 (0.425238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272760 / 0.018006 (0.254754) | 0.476159 / 0.000490 (0.475669) | 0.008467 / 0.000200 (0.008267) | 0.000146 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035755 / 0.037411 (-0.001656) | 0.145038 / 0.014526 (0.130512) | 0.148322 / 0.176557 (-0.028235) | 0.210193 / 0.737135 (-0.526943) | 0.156547 / 0.296338 (-0.139792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.541204 / 0.215209 (0.325995) | 5.382746 / 2.077655 (3.305091) | 2.704229 / 1.504120 (1.200109) | 2.468422 / 1.541195 (0.927227) | 2.522672 / 1.468490 (1.054182) | 0.644899 / 4.584777 (-3.939878) | 4.654401 / 3.745712 (0.908689) | 2.159223 / 5.269862 (-3.110638) | 1.280098 / 4.565676 (-3.285578) | 0.080053 / 0.424275 (-0.344222) | 0.014383 / 0.007607 (0.006776) | 0.662770 / 0.226044 (0.436725) | 6.617651 / 2.268929 (4.348722) | 3.234347 / 55.444624 (-52.210277) | 2.861417 / 6.876477 (-4.015059) | 2.888928 / 2.142072 (0.746856) | 0.792854 / 4.805227 (-4.012374) | 0.172553 / 6.500664 (-6.328111) | 0.078402 / 0.075469 (0.002933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565351 / 1.841788 (-0.276436) | 18.681916 / 8.074308 (10.607608) | 17.264473 / 10.191392 (7.073081) | 0.168461 / 0.680424 (-0.511963) | 0.021353 / 0.534201 (-0.512848) | 0.517843 / 0.579283 (-0.061440) | 0.519907 / 0.434364 (0.085543) | 0.623687 / 0.540337 (0.083350) | 0.761796 / 1.386936 (-0.625140) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006750 / 0.011353 (-0.004603) | 0.004268 / 0.011008 (-0.006741) | 0.098644 / 0.038508 (0.060136) | 0.044643 / 0.023109 (0.021534) | 0.309420 / 0.275898 (0.033522) | 0.379294 / 0.323480 (0.055815) | 0.005729 / 0.007986 (-0.002256) | 0.003615 / 0.004328 (-0.000714) | 0.076086 / 0.004250 (0.071835) | 0.068994 / 0.037052 (0.031942) | 0.325653 / 0.258489 (0.067164) | 0.375187 / 0.293841 (0.081347) | 0.032546 / 0.128546 (-0.096000) | 0.009089 / 0.075646 (-0.066557) | 0.329905 / 0.419271 (-0.089366) | 0.066832 / 0.043533 (0.023300) | 0.299247 / 0.255139 (0.044108) | 0.323460 / 0.283200 (0.040260) | 0.034226 / 0.141683 (-0.107457) | 1.475659 / 1.452155 (0.023505) | 1.556234 / 1.492716 (0.063518) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292305 / 0.018006 (0.274299) | 0.542584 / 0.000490 (0.542094) | 0.003047 / 0.000200 (0.002847) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030096 / 0.037411 (-0.007315) | 0.112341 / 0.014526 (0.097815) | 0.124965 / 0.176557 (-0.051591) | 0.183159 / 0.737135 (-0.553976) | 0.131885 / 0.296338 (-0.164453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426437 / 0.215209 (0.211228) | 4.260984 / 2.077655 (2.183330) | 2.078358 / 1.504120 (0.574238) | 1.877644 / 1.541195 (0.336449) | 2.044036 / 1.468490 
(0.575546) | 0.532980 / 4.584777 (-4.051797) | 3.749573 / 3.745712 (0.003860) | 1.944155 / 5.269862 (-3.325706) | 1.090307 / 4.565676 (-3.475370) | 0.065445 / 0.424275 (-0.358830) | 0.011237 / 0.007607 (0.003630) | 0.521448 / 0.226044 (0.295403) | 5.213118 / 2.268929 (2.944189) | 2.507829 / 55.444624 (-52.936795) | 2.177179 / 6.876477 (-4.699297) | 2.351161 / 2.142072 (0.209088) | 0.656775 / 4.805227 (-4.148452) | 0.141207 / 6.500664 (-6.359457) | 0.063286 / 0.075469 (-0.012183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190281 / 1.841788 (-0.651506) | 15.327424 / 8.074308 (7.253116) | 13.300695 / 10.191392 (3.109303) | 0.190484 / 0.680424 (-0.489939) | 0.017984 / 0.534201 (-0.516217) | 0.405714 / 0.579283 (-0.173569) | 0.435915 / 0.434364 (0.001551) | 0.494083 / 0.540337 (-0.046254) | 0.600616 / 1.386936 (-0.786320) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006740 / 0.011353 (-0.004613) | 0.004289 / 0.011008 (-0.006719) | 0.076532 / 0.038508 (0.038024) | 0.043305 / 0.023109 (0.020196) | 0.356111 / 0.275898 (0.080213) | 0.434121 / 0.323480 (0.110641) | 0.005599 / 0.007986 (-0.002387) | 0.003461 / 0.004328 (-0.000868) | 0.077097 / 0.004250 (0.072847) | 0.055369 / 0.037052 (0.018317) | 0.367093 / 0.258489 (0.108604) | 0.418801 / 0.293841 (0.124960) | 0.032057 / 0.128546 (-0.096489) | 0.009048 / 0.075646 (-0.066599) | 0.082897 / 0.419271 (-0.336374) | 0.050287 / 0.043533 (0.006754) | 0.352060 / 0.255139 (0.096921) | 0.376278 / 0.283200 (0.093078) | 0.023924 / 0.141683 (-0.117759) | 1.522780 / 1.452155 (0.070626) | 1.578938 / 1.492716 (0.086222) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287317 / 0.018006 (0.269311) | 0.508490 / 0.000490 (0.508000) | 0.000431 / 0.000200 (0.000231) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031139 / 0.037411 (-0.006272) | 0.113927 / 0.014526 (0.099401) | 0.128147 / 0.176557 (-0.048409) | 0.179712 / 0.737135 (-0.557424) | 0.134364 / 0.296338 (-0.161975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452834 / 0.215209 (0.237625) | 4.507944 / 2.077655 (2.430289) | 2.287758 / 1.504120 (0.783638) | 2.091145 / 1.541195 (0.549951) | 2.196228 / 1.468490 (0.727738) | 0.539306 / 4.584777 (-4.045471) | 3.838941 / 3.745712 (0.093228) | 1.908801 / 5.269862 (-3.361060) | 1.139235 / 4.565676 (-3.426442) | 0.066677 / 0.424275 (-0.357599) | 0.011422 / 0.007607 (0.003815) | 0.562966 / 0.226044 (0.336921) | 5.633712 / 2.268929 (3.364784) | 2.788622 / 55.444624 (-52.656002) | 2.438465 / 6.876477 (-4.438012) | 2.523479 / 2.142072 (0.381407) | 0.668730 / 4.805227 (-4.136498) | 0.143977 / 6.500664 (-6.356687) | 0.064661 / 0.075469 (-0.010808) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291708 / 1.841788 (-0.550080) | 15.573316 / 8.074308 (7.499008) | 14.435099 / 10.191392 (4.243707) | 0.147745 / 0.680424 (-0.532679) | 0.017602 / 0.534201 (-0.516599) | 0.401560 / 0.579283 (-0.177723) | 0.429861 / 0.434364 (-0.004502) | 0.469800 / 0.540337 (-0.070538) | 0.567515 / 1.386936 (-0.819421) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-20T14:28:35 | 2023-06-21T13:39:50 | 2023-06-21T13:32:22 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5969",
"html_url": "https://github.com/huggingface/datasets/pull/5969",
"diff_url": "https://github.com/huggingface/datasets/pull/5969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5969.patch",
"merged_at": "2023-06-21T13:32:22"
}
|
"Requested" in https://discuss.huggingface.co/t/utf-16-for-datasets/43828/3.
`pd.read_json` also has these parameters, so it makes sense to be consistent.
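A hedged usage sketch, assuming the new parameters are exposed through `load_dataset` under the names in the PR title (`data.jsonl` is a hypothetical local file):
```python
# Hedged usage sketch; the exact keyword names are assumed from the PR title,
# and "data.jsonl" is a hypothetical local file.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="data.jsonl",
    encoding="utf-16",  # open the files as UTF-16 instead of the default UTF-8
    errors="ignore",    # assumed: decoding-error handling, as in Python's open()
)
```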
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5969/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5968
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5968/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5968/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5968/events
|
https://github.com/huggingface/datasets/issues/5968
| 1,765,252,561 |
I_kwDODunzps5pN53R
| 5,968 |
Common Voice datasets still need `use_auth_token=True`
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pcuenca as well. \r\n\r\nNot super urgent btw",
"The issue commes from the dataset itself and is not related to the `datasets` lib\r\n\r\nsee https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1/blob/2c475b3b88e0f2e5828f830a4b91618a25ff20b7/common_voice_6_1.py#L148-L152",
"Let's remove these lines in the dataset no? cc @anton-l @Vaibhavs10 ",
"Addressed in:\r\n\r\n* `mozilla-foundation/common_voice_1_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_1_0/discussions/4)\r\n* `mozilla-foundation/common_voice_2_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_2_0/discussions/3)\r\n* `mozilla-foundation/common_voice_3_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_3_0/discussions/3)\r\n* `mozilla-foundation/common_voice_4_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_4_0/discussions/3)\r\n* `mozilla-foundation/common_voice_5_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_5_0/discussions/3)\r\n* `mozilla-foundation/common_voice_5_1` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_5_1/discussions/3)\r\n* `mozilla-foundation/common_voice_6_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_6_0/discussions/3)\r\n* `mozilla-foundation/common_voice_6_1` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1/discussions/3)\r\n* `mozilla-foundation/common_voice_7_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/discussions/3)\r\n* `mozilla-foundation/common_voice_8_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/discussions/7)\r\n* `mozilla-foundation/common_voice_9_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0/discussions/8)\r\n* `mozilla-foundation/common_voice_10_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_10_0/discussions/7)"
] | 2023-06-20T11:58:37 | 2023-07-29T16:08:59 | 2023-07-29T16:08:58 |
MEMBER
| null | null | null |
### Describe the bug
We don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in.
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
However, it throws an error - probably because something is hardcoded into the dataset loading script.
### Steps to reproduce the bug
1.)
```
huggingface-cli login
```
2.) Make sure that you have accepted the license here:
https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1
3.) Run:
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
4.) You'll get:
```
File ~/hf/lib/python3.10/site-packages/datasets/builder.py:963, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
961 split_dict = SplitDict(dataset_name=self.name)
962 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 963 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
965 # Checksums verification
966 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_1/f4d7854c466f5bd4908988dbd39044ec4fc634d89e0515ab0c51715c0127ffe3/common_voice_6_1.py:150, in CommonVoice._split_generators(self, dl_manager)
148 hf_auth_token = dl_manager.download_config.use_auth_token
149 if hf_auth_token is None:
--> 150 raise ConnectionError(
151 "Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset"
152 )
154 bundle_url_template = STATS["bundleURLTemplate"]
155 bundle_version = bundle_url_template.split("/")[0]
ConnectionError: Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset
```
### Expected behavior
One should not have to pass `use_auth_token=True`. Also see discussion here: https://github.com/huggingface/blog/pull/1243#discussion_r1235131150
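As a minimal sketch of why the explicit flag should be unnecessary (assuming a token saved by `huggingface-cli login`; the gating itself is enforced by the Hub):
```python
# Hedged sketch: after `huggingface-cli login`, the token is stored locally, so
# library code can pick it up without the user passing use_auth_token=True.
from huggingface_hub import HfFolder

token = HfFolder.get_token()  # token saved by `huggingface-cli login`, or None
if token is None:
    print("No saved token found - run `huggingface-cli login` first")
else:
    print("A local token exists; gated downloads can use it implicitly")
```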
### Environment info
```
- `datasets` version: 2.13.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5968/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5967
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5967/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5967/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5967/events
|
https://github.com/huggingface/datasets/issues/5967
| 1,763,926,520 |
I_kwDODunzps5pI2H4
| 5,967 |
Config name / split name lost after map with multiproc
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"This must be due to DatasetInfo.from_merge which drops them and is used in `concatenate_datasets`.\r\n\r\nAnd you're experiencing this issue because multiprocessing does concatenate the resulting datasets from each process.\r\n\r\nMaybe they should be kept if all the subdatasets share the same values for config_name and split",
"That sounds like a clean workaround!"
] | 2023-06-19T17:27:36 | 2023-06-28T08:55:25 | null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Calling the `.map` method on a dataset loses its config name / split name, but only when run with multiprocessing.
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor
import numpy as np
# load dummy dataset
libri = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
# make train / test splits
libri = libri["validation"].train_test_split(seed=42, shuffle=True, test_size=0.1)
# example feature extractor
model_id = "ntu-spml/distilhubert"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True)
sampling_rate = feature_extractor.sampling_rate
libri = libri.cast_column("audio", Audio(sampling_rate=sampling_rate))
max_duration = 30.0
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays,
sampling_rate=feature_extractor.sampling_rate,
max_length=int(feature_extractor.sampling_rate * max_duration),
truncation=True,
return_attention_mask=True,
)
return inputs
# single proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=1
)
print(10 * "=" ,"Single processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
# multi proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=2
)
print(10 * "=" ,"Multi processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
```
**Print Output:**
```
========== Single processing ==========
Config name before: clean Split name before: validation
Config name after: clean Split name after: validation
========== Multi processing ==========
Config name before: clean Split name before: validation
Config name after: None Split name after: None
```
=> we can see that the config/split names are lost in the multiprocessing setting
### Expected behavior
Should retain both config / split names in the multiproc setting
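A minimal sketch of the rule suggested in the comments (keep the values only when every per-process shard agrees); this is an assumption about the fix, not the actual `DatasetInfo.from_merge` implementation:
```python
# Hedged sketch (an assumption about the fix, not the actual DatasetInfo.from_merge):
# keep config_name / split only when every per-process shard agrees on the value.
def merge_config_name_and_split(infos):
    config_names = {info.config_name for info in infos}
    splits = {str(info.split) for info in infos}
    config_name = config_names.pop() if len(config_names) == 1 else None
    split = splits.pop() if len(splits) == 1 else None
    return config_name, split
```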
### Environment info
- `datasets` version: 2.13.1.dev0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5967/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5966
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5966/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5966/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5966/events
|
https://github.com/huggingface/datasets/pull/5966
| 1,763,885,914 |
PR_kwDODunzps5TXBLP
| 5,966 |
Fix JSON generation in benchmarks CI
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006186 / 0.011353 (-0.005167) | 0.003744 / 0.011008 (-0.007264) | 0.097295 / 0.038508 (0.058787) | 0.037106 / 0.023109 (0.013997) | 0.424154 / 0.275898 (0.148256) | 0.474536 / 0.323480 (0.151057) | 0.003454 / 0.007986 (-0.004532) | 0.003865 / 0.004328 (-0.000463) | 0.077348 / 0.004250 (0.073097) | 0.051728 / 0.037052 (0.014675) | 0.437120 / 0.258489 (0.178631) | 0.478379 / 0.293841 (0.184538) | 0.028939 / 0.128546 (-0.099608) | 0.008376 / 0.075646 (-0.067270) | 0.312002 / 0.419271 (-0.107270) | 0.053723 / 0.043533 (0.010190) | 0.424815 / 0.255139 (0.169676) | 0.446203 / 0.283200 (0.163004) | 0.026553 / 0.141683 (-0.115130) | 1.479983 / 1.452155 (0.027828) | 1.530613 / 1.492716 (0.037896) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196627 / 0.018006 (0.178620) | 0.422361 / 0.000490 (0.421871) | 0.003442 / 0.000200 (0.003242) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022913 / 0.037411 (-0.014499) | 0.096011 / 0.014526 (0.081485) | 0.104091 / 0.176557 (-0.072466) | 0.163273 / 0.737135 (-0.573862) | 0.109142 / 0.296338 (-0.187197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431032 / 0.215209 (0.215823) | 4.314391 / 2.077655 (2.236737) | 2.003812 / 1.504120 (0.499692) | 1.799538 / 1.541195 (0.258344) | 1.830026 / 1.468490 
(0.361536) | 0.560131 / 4.584777 (-4.024646) | 3.368997 / 3.745712 (-0.376715) | 1.703032 / 5.269862 (-3.566830) | 1.026949 / 4.565676 (-3.538727) | 0.067507 / 0.424275 (-0.356768) | 0.010910 / 0.007607 (0.003303) | 0.532606 / 0.226044 (0.306562) | 5.345179 / 2.268929 (3.076250) | 2.368077 / 55.444624 (-53.076548) | 2.028913 / 6.876477 (-4.847564) | 2.147621 / 2.142072 (0.005549) | 0.675696 / 4.805227 (-4.129531) | 0.134902 / 6.500664 (-6.365762) | 0.065004 / 0.075469 (-0.010465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233412 / 1.841788 (-0.608376) | 13.767465 / 8.074308 (5.693157) | 13.933653 / 10.191392 (3.742261) | 0.129010 / 0.680424 (-0.551414) | 0.016708 / 0.534201 (-0.517493) | 0.362341 / 0.579283 (-0.216942) | 0.390902 / 0.434364 (-0.043462) | 0.429156 / 0.540337 (-0.111182) | 0.521166 / 1.386936 (-0.865770) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006169 / 0.011353 (-0.005184) | 0.003839 / 0.011008 (-0.007169) | 0.078784 / 0.038508 (0.040276) | 0.040218 / 0.023109 (0.017109) | 0.360439 / 0.275898 (0.084541) | 0.423957 / 0.323480 (0.100477) | 0.003456 / 0.007986 (-0.004529) | 0.002900 / 0.004328 (-0.001428) | 0.078820 / 0.004250 (0.074569) | 0.047240 / 0.037052 (0.010187) | 0.372081 / 0.258489 (0.113592) | 0.424263 / 0.293841 (0.130422) | 0.027977 / 0.128546 (-0.100569) | 0.008400 / 0.075646 (-0.067246) | 0.084399 / 0.419271 (-0.334872) | 0.043303 / 0.043533 (-0.000230) | 0.361583 / 0.255139 (0.106444) | 0.394987 / 0.283200 (0.111787) | 0.020006 / 0.141683 (-0.121677) | 1.520208 / 1.452155 (0.068053) | 1.587335 / 1.492716 (0.094619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223847 / 0.018006 (0.205840) | 0.402194 / 0.000490 (0.401704) | 0.000384 / 0.000200 (0.000184) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024902 / 0.037411 (-0.012509) | 0.099076 / 0.014526 (0.084550) | 0.108041 / 0.176557 (-0.068516) | 0.159385 / 0.737135 (-0.577750) | 0.111442 / 0.296338 (-0.184896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446232 / 0.215209 (0.231023) | 4.464927 / 2.077655 (2.387272) | 2.155234 / 1.504120 (0.651114) | 1.953645 / 1.541195 (0.412450) | 1.965991 / 1.468490 (0.497501) | 0.553473 / 4.584777 (-4.031304) | 3.321397 / 3.745712 (-0.424315) | 1.693761 / 5.269862 (-3.576101) | 1.006299 / 4.565676 (-3.559378) | 0.067013 / 0.424275 (-0.357262) | 0.011116 / 0.007607 (0.003509) | 0.555014 / 0.226044 (0.328970) | 5.535694 / 2.268929 (3.266765) | 2.598339 / 55.444624 (-52.846285) | 2.249298 / 6.876477 (-4.627179) | 2.243419 / 2.142072 (0.101347) | 0.667603 / 4.805227 (-4.137624) | 0.133322 / 6.500664 (-6.367343) | 0.065473 / 0.075469 (-0.009996) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293051 / 1.841788 (-0.548737) | 14.103731 / 8.074308 (6.029423) | 14.215204 / 10.191392 (4.023812) | 0.143990 / 0.680424 (-0.536434) | 0.016805 / 0.534201 (-0.517396) | 0.363264 / 0.579283 (-0.216019) | 0.392769 / 0.434364 (-0.041594) | 0.425291 / 0.540337 (-0.115046) | 0.515479 / 1.386936 (-0.871457) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006346 / 0.011353 (-0.005006) | 0.004130 / 0.011008 (-0.006878) | 0.096898 / 0.038508 (0.058390) | 0.042564 / 0.023109 (0.019455) | 0.343748 / 0.275898 (0.067850) | 0.412515 / 0.323480 (0.089035) | 0.006153 / 0.007986 (-0.001833) | 0.003345 / 0.004328 (-0.000984) | 0.075314 / 0.004250 (0.071064) | 0.061478 / 0.037052 (0.024426) | 0.362948 / 0.258489 (0.104459) | 0.401533 / 0.293841 (0.107692) | 0.032363 / 0.128546 (-0.096184) | 0.008780 / 0.075646 (-0.066867) | 0.328691 / 0.419271 (-0.090580) | 0.054253 / 0.043533 (0.010721) | 0.340783 / 0.255139 (0.085644) | 0.360705 / 0.283200 (0.077505) | 0.023183 / 0.141683 (-0.118500) | 1.484078 / 1.452155 (0.031924) | 1.528581 / 1.492716 (0.035865) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208732 / 0.018006 (0.190726) | 0.452572 / 0.000490 (0.452082) | 0.002936 / 0.000200 (0.002737) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024616 / 0.037411 (-0.012795) | 0.107547 / 0.014526 (0.093021) | 0.114492 / 0.176557 (-0.062065) | 0.171770 / 0.737135 (-0.565365) | 0.122538 / 0.296338 (-0.173800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406140 / 0.215209 (0.190930) | 4.062391 / 2.077655 (1.984736) | 1.865962 / 1.504120 (0.361842) | 1.682236 / 1.541195 (0.141041) | 1.738119 / 1.468490 
(0.269629) | 0.532244 / 4.584777 (-4.052533) | 3.816421 / 3.745712 (0.070709) | 2.981205 / 5.269862 (-2.288656) | 1.519497 / 4.565676 (-3.046179) | 0.065904 / 0.424275 (-0.358371) | 0.011277 / 0.007607 (0.003670) | 0.512789 / 0.226044 (0.286745) | 5.107618 / 2.268929 (2.838690) | 2.419399 / 55.444624 (-53.025226) | 2.079262 / 6.876477 (-4.797214) | 2.150447 / 2.142072 (0.008375) | 0.696737 / 4.805227 (-4.108490) | 0.142497 / 6.500664 (-6.358167) | 0.063521 / 0.075469 (-0.011949) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180692 / 1.841788 (-0.661095) | 14.343084 / 8.074308 (6.268776) | 13.303719 / 10.191392 (3.112327) | 0.164234 / 0.680424 (-0.516190) | 0.017439 / 0.534201 (-0.516762) | 0.399712 / 0.579283 (-0.179571) | 0.428248 / 0.434364 (-0.006115) | 0.471909 / 0.540337 (-0.068428) | 0.573853 / 1.386936 (-0.813083) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006210 / 0.011353 (-0.005143) | 0.004104 / 0.011008 (-0.006905) | 0.075140 / 0.038508 (0.036632) | 0.044647 / 0.023109 (0.021538) | 0.370120 / 0.275898 (0.094222) | 0.452936 / 0.323480 (0.129457) | 0.003943 / 0.007986 (-0.004042) | 0.003285 / 0.004328 (-0.001043) | 0.075267 / 0.004250 (0.071017) | 0.055517 / 0.037052 (0.018465) | 0.396385 / 0.258489 (0.137896) | 0.447870 / 0.293841 (0.154029) | 0.031342 / 0.128546 (-0.097204) | 0.008720 / 0.075646 (-0.066926) | 0.082702 / 0.419271 (-0.336570) | 0.051010 / 0.043533 (0.007477) | 0.350546 / 0.255139 (0.095407) | 0.425395 / 0.283200 (0.142195) | 0.024483 / 0.141683 (-0.117200) | 1.467341 / 1.452155 (0.015186) | 1.537187 / 1.492716 (0.044471) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218067 / 0.018006 (0.200061) | 0.441603 / 0.000490 (0.441114) | 0.003711 / 0.000200 (0.003512) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028669 / 0.037411 (-0.008742) | 0.112941 / 0.014526 (0.098415) | 0.122584 / 0.176557 (-0.053972) | 0.176494 / 0.737135 (-0.560641) | 0.129369 / 0.296338 (-0.166970) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434543 / 0.215209 (0.219334) | 4.344056 / 2.077655 (2.266401) | 2.079286 / 1.504120 (0.575166) | 1.887264 / 1.541195 (0.346069) | 1.910386 / 1.468490 (0.441896) | 0.538824 / 4.584777 (-4.045953) | 3.844786 / 3.745712 (0.099074) | 2.902091 / 5.269862 (-2.367770) | 1.270852 / 4.565676 (-3.294824) | 0.066324 / 0.424275 (-0.357951) | 0.011346 / 0.007607 (0.003739) | 0.537122 / 0.226044 (0.311078) | 5.367354 / 2.268929 (3.098426) | 2.533672 / 55.444624 (-52.910952) | 2.203260 / 6.876477 (-4.673217) | 2.224310 / 2.142072 (0.082237) | 0.663806 / 4.805227 (-4.141422) | 0.142758 / 6.500664 (-6.357906) | 0.063870 / 0.075469 (-0.011599) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260487 / 1.841788 (-0.581301) | 14.800106 / 8.074308 (6.725798) | 13.993488 / 10.191392 (3.802096) | 0.165829 / 0.680424 (-0.514595) | 0.017347 / 0.534201 (-0.516854) | 0.401819 / 0.579283 (-0.177464) | 0.424577 / 0.434364 (-0.009787) | 0.475161 / 0.540337 (-0.065176) | 0.574659 / 1.386936 (-0.812277) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-19T16:56:06 | 2023-06-19T17:29:11 | 2023-06-19T17:22:10 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5966",
"html_url": "https://github.com/huggingface/datasets/pull/5966",
"diff_url": "https://github.com/huggingface/datasets/pull/5966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5966.patch",
"merged_at": "2023-06-19T17:22:10"
}
|
Related to changes made in https://github.com/iterative/dvc/pull/9475
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5966/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5965
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5965/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5965/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5965/events
|
https://github.com/huggingface/datasets/issues/5965
| 1,763,648,540 |
I_kwDODunzps5pHyQc
| 5,965 |
"Couldn't cast array of type" in complex datasets
|
{
"login": "piercefreeman",
"id": 1712066,
"node_id": "MDQ6VXNlcjE3MTIwNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piercefreeman",
"html_url": "https://github.com/piercefreeman",
"followers_url": "https://api.github.com/users/piercefreeman/followers",
"following_url": "https://api.github.com/users/piercefreeman/following{/other_user}",
"gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions",
"organizations_url": "https://api.github.com/users/piercefreeman/orgs",
"repos_url": "https://api.github.com/users/piercefreeman/repos",
"events_url": "https://api.github.com/users/piercefreeman/events{/privacy}",
"received_events_url": "https://api.github.com/users/piercefreeman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting! \r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datasets.Sequence(datasets.Value(\"string\"))})\r\n)\r\n```\r\n\r\nThis error stems from our type promotion not handling the nested case. But this promotion/casting allocates memory in most scenarios, which can be problematic for large datasets, so explicitly passing the features is the optimal solution.",
"Hi @mariosasko thanks for the context, this is helpful to know. Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nFeels like something that would be easy to implement and could save memory / deal with this case in a standardized way.",
"> . Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nInteresting proposal! Yes, we could consider doing this if the (return) type hint is `TypedDict`, and raise an error that type hints are incorrect if the cast using the inferred types fails.",
"@mariosasko Put up an initial PR to implement this proposal. Let me know your thoughts on direction and what else should be in-scope here."
] | 2023-06-19T14:16:14 | 2023-07-26T15:13:53 | 2023-07-26T15:13:53 |
NONE
| null | null | null |
### Describe the bug
When mapping a dataset with complex types, `datasets` is sometimes unable to infer a valid schema for the batches returned by the datasets.map() function. This often comes from conflicting types, such as when empty lists and filled lists compete for the same field value.
This is prone to happen in batch mapping, when the mapper returns a sequence of null/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https://github.com/piercefreeman/lassen/pull/3)) but it feels like this ideally should be solved at the core library level.
Note that the reproduction case only throws this error if the first datapoint has the empty list. If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.
### Steps to reproduce the bug
A trivial reproduction case:
```python
from typing import Iterator, Any

import pandas as pd
import pytest

from datasets import Dataset


def batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:
    # Every column has the same length, so any of them gives the batch size.
    lengths = [len(values) for values in batch.values()]
    for i in range(next(iter(lengths))):
        yield {feature: values[i] for feature, values in batch.items()}


def examples_to_batch(examples) -> dict[str, list[Any]]:
    batch = {}
    for example in examples:
        for feature, value in example.items():
            if feature not in batch:
                batch[feature] = []
            batch[feature].append(value)
    return batch


def batch_process(examples, explicit_schema: bool = False):
    new_examples = []
    for example in batch_to_examples(examples):
        new_examples.append(dict(texts=example["raw_text"].split()))
    return examples_to_batch(new_examples)


df = pd.DataFrame(
    [
        {"raw_text": ""},
        {"raw_text": "This is a test"},
        {"raw_text": "This is another test"},
    ]
)

dataset = Dataset.from_pandas(df)

# datasets won't be able to typehint a dataset that starts with an empty example.
with pytest.raises(TypeError, match="Couldn't cast array of type"):
    dataset = dataset.map(
        batch_process,
        batched=True,
        batch_size=1,
        num_proc=1,
        remove_columns=dataset.column_names,
    )
```
This results in crashes like:
```bash
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 2109, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1998, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type string to null
```
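As a workaround sketch, mirroring the fix suggested in the comments and continuing the reproduction above: passing the target features explicitly to `map` sidesteps the schema inference. The feature spec below is an assumption based on the `texts` field produced by `batch_process`.
```python
import datasets

# Hedged workaround: declare the output schema up front so an initial empty list
# is not inferred as a null-typed column.
dataset = dataset.map(
    batch_process,
    batched=True,
    batch_size=1,
    remove_columns=dataset.column_names,
    features=datasets.Features({"texts": datasets.Sequence(datasets.Value("string"))}),
)
```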
### Expected behavior
The code should successfully map and create a new dataset without error.
### Environment info
Mac OSX, Linux
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5965/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5964
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5964/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5964/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5964/events
|
https://github.com/huggingface/datasets/pull/5964
| 1,763,513,574 |
PR_kwDODunzps5TVweZ
| 5,964 |
Always return list in `list_datasets`
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006795 / 0.011353 (-0.004558) | 0.004170 / 0.011008 (-0.006838) | 0.098698 / 0.038508 (0.060190) | 0.045393 / 0.023109 (0.022284) | 0.309205 / 0.275898 (0.033307) | 0.361333 / 0.323480 (0.037853) | 0.006009 / 0.007986 (-0.001977) | 0.003334 / 0.004328 (-0.000995) | 0.075071 / 0.004250 (0.070821) | 0.062587 / 0.037052 (0.025535) | 0.322395 / 0.258489 (0.063906) | 0.360499 / 0.293841 (0.066659) | 0.032243 / 0.128546 (-0.096303) | 0.008768 / 0.075646 (-0.066878) | 0.329799 / 0.419271 (-0.089472) | 0.062261 / 0.043533 (0.018728) | 0.298112 / 0.255139 (0.042973) | 0.322815 / 0.283200 (0.039615) | 0.032348 / 0.141683 (-0.109335) | 1.445807 / 1.452155 (-0.006347) | 1.528768 / 1.492716 (0.036051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195701 / 0.018006 (0.177695) | 0.437042 / 0.000490 (0.436552) | 0.003867 / 0.000200 (0.003667) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026713 / 0.037411 (-0.010698) | 0.109548 / 0.014526 (0.095022) | 0.119216 / 0.176557 (-0.057341) | 0.178947 / 0.737135 (-0.558188) | 0.125224 / 0.296338 (-0.171114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400885 / 0.215209 (0.185676) | 3.991223 / 2.077655 (1.913568) | 1.818449 / 1.504120 (0.314329) | 1.609285 / 1.541195 (0.068090) | 1.666675 / 1.468490 
(0.198184) | 0.531486 / 4.584777 (-4.053291) | 3.770142 / 3.745712 (0.024430) | 3.057189 / 5.269862 (-2.212673) | 1.517491 / 4.565676 (-3.048186) | 0.065782 / 0.424275 (-0.358493) | 0.011251 / 0.007607 (0.003644) | 0.504277 / 0.226044 (0.278233) | 5.038979 / 2.268929 (2.770050) | 2.254717 / 55.444624 (-53.189908) | 1.929743 / 6.876477 (-4.946734) | 2.080051 / 2.142072 (-0.062022) | 0.656831 / 4.805227 (-4.148396) | 0.142860 / 6.500664 (-6.357804) | 0.063057 / 0.075469 (-0.012412) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208819 / 1.841788 (-0.632969) | 14.456966 / 8.074308 (6.382658) | 12.839799 / 10.191392 (2.648407) | 0.164361 / 0.680424 (-0.516063) | 0.017330 / 0.534201 (-0.516871) | 0.397384 / 0.579283 (-0.181899) | 0.422704 / 0.434364 (-0.011660) | 0.472065 / 0.540337 (-0.068273) | 0.576960 / 1.386936 (-0.809976) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006950 / 0.011353 (-0.004403) | 0.004012 / 0.011008 (-0.006997) | 0.076050 / 0.038508 (0.037542) | 0.046646 / 0.023109 (0.023537) | 0.353813 / 0.275898 (0.077915) | 0.417111 / 0.323480 (0.093631) | 0.005422 / 0.007986 (-0.002564) | 0.003356 / 0.004328 (-0.000972) | 0.076662 / 0.004250 (0.072411) | 0.055018 / 0.037052 (0.017966) | 0.371561 / 0.258489 (0.113072) | 0.410471 / 0.293841 (0.116630) | 0.031860 / 0.128546 (-0.096686) | 0.008754 / 0.075646 (-0.066893) | 0.083192 / 0.419271 (-0.336079) | 0.050479 / 0.043533 (0.006946) | 0.351725 / 0.255139 (0.096586) | 0.371596 / 0.283200 (0.088396) | 0.023042 / 0.141683 (-0.118641) | 1.480533 / 1.452155 (0.028379) | 1.545970 / 1.492716 (0.053254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220095 / 0.018006 (0.202089) | 0.441550 / 0.000490 (0.441061) | 0.000375 / 0.000200 (0.000175) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029527 / 0.037411 (-0.007884) | 0.111645 / 0.014526 (0.097119) | 0.125732 / 0.176557 (-0.050825) | 0.177322 / 0.737135 (-0.559813) | 0.128620 / 0.296338 (-0.167718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432415 / 0.215209 (0.217206) | 4.314381 / 2.077655 (2.236726) | 2.079450 / 1.504120 (0.575331) | 1.893139 / 1.541195 (0.351944) | 1.951363 / 1.468490 (0.482873) | 0.531466 / 4.584777 (-4.053311) | 3.716860 / 3.745712 (-0.028852) | 1.850111 / 5.269862 (-3.419750) | 1.100676 / 4.565676 (-3.465000) | 0.066247 / 0.424275 (-0.358028) | 0.011503 / 0.007607 (0.003896) | 0.537208 / 0.226044 (0.311164) | 5.367560 / 2.268929 (3.098631) | 2.543697 / 55.444624 (-52.900927) | 2.221670 / 6.876477 (-4.654806) | 2.252009 / 2.142072 (0.109937) | 0.658509 / 4.805227 (-4.146718) | 0.142345 / 6.500664 (-6.358319) | 0.064701 / 0.075469 (-0.010768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266442 / 1.841788 (-0.575346) | 15.105953 / 8.074308 (7.031645) | 14.288229 / 10.191392 (4.096837) | 0.161182 / 0.680424 (-0.519242) | 0.017074 / 0.534201 (-0.517127) | 0.399464 / 0.579283 (-0.179819) | 0.419459 / 0.434364 (-0.014905) | 0.467553 / 0.540337 (-0.072784) | 0.566337 / 1.386936 (-0.820599) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-19T13:07:08 | 2023-06-19T17:29:37 | 2023-06-19T17:22:41 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5964",
"html_url": "https://github.com/huggingface/datasets/pull/5964",
"diff_url": "https://github.com/huggingface/datasets/pull/5964.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5964.patch",
"merged_at": "2023-06-19T17:22:41"
}
|
Fix #5925
Plus, deprecate `list_datasets`/`inspect_dataset` in favor of `huggingface_hub.list_datasets`/"git clone workflow" (downloads data files)
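For reference, a minimal sketch of the recommended replacement, assuming `huggingface_hub` is installed (the `author` and `limit` filters are illustrative):
```python
from huggingface_hub import list_datasets

# list_datasets returns dataset metadata objects; wrap in list() since it may be a generator.
for ds_info in list(list_datasets(author="huggingface", limit=5)):
    print(ds_info.id)
```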
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5964/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5963
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5963/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5963/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5963/events
|
https://github.com/huggingface/datasets/issues/5963
| 1,762,774,457 |
I_kwDODunzps5pEc25
| 5,963 |
Got an error _pickle.PicklingError use Dataset.from_spark.
|
{
"login": "yanzia12138",
"id": 112800614,
"node_id": "U_kgDOBrkzZg",
"avatar_url": "https://avatars.githubusercontent.com/u/112800614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanzia12138",
"html_url": "https://github.com/yanzia12138",
"followers_url": "https://api.github.com/users/yanzia12138/followers",
"following_url": "https://api.github.com/users/yanzia12138/following{/other_user}",
"gists_url": "https://api.github.com/users/yanzia12138/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanzia12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanzia12138/subscriptions",
"organizations_url": "https://api.github.com/users/yanzia12138/orgs",
"repos_url": "https://api.github.com/users/yanzia12138/repos",
"events_url": "https://api.github.com/users/yanzia12138/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanzia12138/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"i got error using method from_spark when using multi-node Spark cluster. seems could only use \"from_spark\" in local?",
"@lhoestq ",
"cc @maddiedawson it looks like there an issue with `_validate_cache_dir` ?\r\n\r\nIt looks like the function passed to mapPartitions has a reference to the Spark dataset builder, and therefore contains the SparkContext itself.\r\n\r\nI think it can be fixed by defining `create_cache_and_write_probe` outside the Spark dataset builder, and pass a `partial(create_cache_and_write_probe, cache_dir=self._cache_dir)` to `mapPartitions`",
"Just saw this; thanks for flagging! Your proposed solution sounds good. I can prepare a PR",
"@maddiedawson can you show me the demo ,so i can test in local .before your PR"
] | 2023-06-19T05:30:35 | 2023-07-24T11:55:46 | 2023-07-24T11:55:46 |
NONE
| null | null | null |
python 3.9.2
Got a `_pickle.PicklingError` when using Dataset.from_spark.
The dataset import loads data from a Spark dataframe on a multi-node Spark cluster:
df = spark.read.parquet(args.input_data).repartition(50)
ds = Dataset.from_spark(df, keep_in_memory=True,
cache_dir="/pnc-data/data/nuplan/t5_spark/cache_data")
ds.save_to_disk(args.output_data)
Error :
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma
tion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
_Originally posted by @yanzia12138 in https://github.com/huggingface/datasets/issues/5701#issuecomment-1594674306_
W
Traceback (most recent call last):
File "/home/work/main.py", line 100, in <module>
run(args)
File "/home/work/main.py", line 80, in run
ds = Dataset.from_spark(df1, keep_in_memory=True,
File "/home/work/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1281, in from_spark
return SparkDatasetReader(
File "/home/work/.local/lib/python3.9/site-packages/datasets/io/spark.py", line 53, in read
self.builder.download_and_prepare(
File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 254, in _prepare_split
self._validate_cache_dir()
File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 122, in _validate_cache_dir
self._spark.sparkContext.parallelize(range(1), 1).mapPartitions(create_cache_and_write_probe).collect()
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 950, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2951, in _jrdd
wrapped_func = _wrap_function(self.ctx, self.func, self._prev_jrdd_deserializer,
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2830, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2816, in _prepare_for_python_RDD
pickled_command = ser.dumps(command)
File "/home/work/.local/lib/python3.9/site-packages/pyspark/serializers.py", line 447, in dumps
raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. S
parkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/19 13:51:21 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
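The fix proposed in the comments boils down to a general PySpark rule: the function shipped to `mapPartitions` must not close over any object that holds the `SparkContext`. A hedged sketch of the pattern (function names and the cache path are illustrative, not the actual `datasets` internals):
```python
import os
from functools import partial


def create_cache_and_write_probe(iterator, cache_dir):
    # Module-level function: it only captures plain values, never the dataset builder
    # (and therefore never the SparkContext), so it pickles cleanly for the workers.
    os.makedirs(cache_dir, exist_ok=True)
    with open(os.path.join(cache_dir, "probe"), "w") as f:
        f.write("ok")
    yield 1


# Only picklable arguments are bound via functools.partial.
# `spark` is assumed to be an existing SparkSession.
probe = partial(create_cache_and_write_probe, cache_dir="/tmp/spark_probe_cache")
spark.sparkContext.parallelize(range(1), 1).mapPartitions(probe).collect()
```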
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5963/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5962
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5962/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5962/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5962/events
|
https://github.com/huggingface/datasets/issues/5962
| 1,761,589,882 |
I_kwDODunzps5o_7p6
| 5,962 |
Issue with train_test_split maintaining the same underlying PyArrow Table
|
{
"login": "Oziel14",
"id": 70730520,
"node_id": "MDQ6VXNlcjcwNzMwNTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/70730520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oziel14",
"html_url": "https://github.com/Oziel14",
"followers_url": "https://api.github.com/users/Oziel14/followers",
"following_url": "https://api.github.com/users/Oziel14/following{/other_user}",
"gists_url": "https://api.github.com/users/Oziel14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oziel14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oziel14/subscriptions",
"organizations_url": "https://api.github.com/users/Oziel14/orgs",
"repos_url": "https://api.github.com/users/Oziel14/repos",
"events_url": "https://api.github.com/users/Oziel14/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oziel14/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[] | 2023-06-17T02:19:58 | 2023-06-17T02:19:58 | null |
NONE
| null | null | null |
### Describe the bug
I've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.
### Steps to reproduce the bug
1. Load any dataset ```dataset = load_dataset("lhoestq/demo1")```
2. Try the next code:
```python
from datasets import Dataset, DatasetDict
train_size = 0.6
split_train = dataset["train"].train_test_split(
train_size=train_size,
)
separate_dataset_dict = DatasetDict({
"train": split_train["train"],
"test": split_train["test"],
})
```
3. Printing the dataset with ```print(separate_dataset_dict)``` indicates that the splits have 3 and 2 rows respectively.
4. But the next code:
```python
print(len(separate_dataset_dict["train"].data['id']))
print(len(separate_dataset_dict["test"].data['id']))
```
Indicates that both tables still have 5 rows.
### Expected behavior
However, I've noticed that train_test_split["train"].data, test_val_split["train"].data, and test_val_split["test"].data are identical, suggesting that they all point to the same underlying PyArrow Table. This means that the split datasets are not independent, as I expected.
I believe this is a bug in the train_test_split implementation, as I would expect this function to return datasets with separate underlying PyArrow Tables. Could you please help me understand if this is expected behavior, or if there's a workaround to create truly independent split datasets?
I would appreciate any assistance with this issue. Thank you.
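A hedged workaround sketch, continuing the code above and assuming `flatten_indices()` behaves as documented: it materializes the indices mapping into a new Arrow table, so each split stops referencing the full underlying table.
```python
# Materialize each split into its own Arrow table (drops the shared indices mapping).
independent = DatasetDict({
    name: ds.flatten_indices() for name, ds in separate_dataset_dict.items()
})

print(len(independent["train"].data["id"]))  # expected 3
print(len(independent["test"].data["id"]))   # expected 2
```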
### Environment info
I tried in Colab:
- `datasets` version: 2.13.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
and my PC:
- `datasets` version: 2.13.0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5962/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5961
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5961/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5961/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5961/events
|
https://github.com/huggingface/datasets/issues/5961
| 1,758,525,111 |
I_kwDODunzps5o0Pa3
| 5,961 |
IterableDataset: split by node and map may preprocess samples that will be skipped anyway
|
{
"login": "johnchienbronci",
"id": 27708347,
"node_id": "MDQ6VXNlcjI3NzA4MzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27708347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnchienbronci",
"html_url": "https://github.com/johnchienbronci",
"followers_url": "https://api.github.com/users/johnchienbronci/followers",
"following_url": "https://api.github.com/users/johnchienbronci/following{/other_user}",
"gists_url": "https://api.github.com/users/johnchienbronci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnchienbronci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnchienbronci/subscriptions",
"organizations_url": "https://api.github.com/users/johnchienbronci/orgs",
"repos_url": "https://api.github.com/users/johnchienbronci/repos",
"events_url": "https://api.github.com/users/johnchienbronci/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnchienbronci/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"Does \"number of shards\" refer to the total number of data?\r\n\r\nmy config:\r\nnproc_per_node=2\r\nds=ds['train'] = load_dataset(streaming=True).take(50000)\r\n\r\nI'm test again: in prepare_data(), data have the same for each GPU\r\n",
"The number of shards is `ds.n_shards`. It corresponds generally to the number of files the dataset is made of, to be able to distribute to several nodes.\r\n\r\n**You don't end up with the same data per GPU**. But all the samples are going through your preprocessing function you pass to map. They are just skipped afterwards to only keep 1 sample out of n(GPUs)",
"For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end. \r\nIs my understanding correct?\r\n\r\nWhere can I print the actual training data for each GPU?",
"> For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end.\r\nIs my understanding correct?\r\n\r\nYes exactly :)\r\n\r\n> Where can I print the actual training data for each GPU?\r\n\r\nYou should call print in the data_collator",
"I print out n_shards, and under multiple GPUs, this value is always 1.\r\nIs this value correct?",
"Yes it's correct, and it explains why you always have the same data passed to your map function (the data can't be split).\r\n\r\nBut after being passed to `map`, each GPU keeps one example out of n(GPUs) so that you don't end up with duplicate data across GPUs",
"> > For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end.\r\n> > Is my understanding correct?\r\n> \r\n> Yes exactly :)\r\n> \r\n> > Where can I print the actual training data for each GPU?\r\n> \r\n> You should call print in the data_collator\r\n\r\nOK, when printing the train data in the data collator, each GPU sees different data.\r\n\r\nThanks for your reply",
"Do we have a solution for this one? Or it's required to get \"number of shards is a factor of number of GPUs: in that case the shards are evenly distributed per GPU\"",
"For now it's required to have a number of shards that is a factor of the number of GPUs to not have all the workers process the same data (and then skip the right ones to not end up training on duplicate data).\r\n\r\nIt would be quite complex to implement a strategy that would utilize all the GPUs with an arbitrary number of shards even at the end of training"
] | 2023-06-15T10:29:10 | 2023-09-01T10:35:11 | null |
NONE
| null | null | null |
There are two ways an iterable dataset can be split by node:
1. if the number of shards is a factor of number of GPUs: in that case the shards are evenly distributed per GPU
2. otherwise, each GPU iterates over the data and at the end keeps 1 sample out of n(GPUs) - skipping the others.
In case 2. it's therefore possible to have the same examples passed to `prepare_dataset` for each GPU.
This doesn't sound optimized though, because it runs the preprocessing on samples that won't be used in the end.
Could you open a new issue so that we can discuss about this and find a solution ?
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/5360#issuecomment-1592729051_
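For reference, a minimal sketch of the two cases, assuming `datasets.distributed.split_dataset_by_node` (which implements this behavior):
```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("c4", "en", split="train", streaming=True)
print(ds.n_shards)  # if the shards divide evenly across world_size, each node gets its own shards (case 1)

# Otherwise (case 2), every node iterates over all samples and keeps 1 out of world_size,
# so any .map() preprocessing still runs on samples that are later skipped.
node_ds = split_dataset_by_node(ds, rank=0, world_size=2)
```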
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5961/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5959
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5959/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5959/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5959/events
|
https://github.com/huggingface/datasets/issues/5959
| 1,757,397,507 |
I_kwDODunzps5ov8ID
| 5,959 |
read metric glue.py from local file
|
{
"login": "JiazhaoLi",
"id": 31148397,
"node_id": "MDQ6VXNlcjMxMTQ4Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/31148397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JiazhaoLi",
"html_url": "https://github.com/JiazhaoLi",
"followers_url": "https://api.github.com/users/JiazhaoLi/followers",
"following_url": "https://api.github.com/users/JiazhaoLi/following{/other_user}",
"gists_url": "https://api.github.com/users/JiazhaoLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JiazhaoLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiazhaoLi/subscriptions",
"organizations_url": "https://api.github.com/users/JiazhaoLi/orgs",
"repos_url": "https://api.github.com/users/JiazhaoLi/repos",
"events_url": "https://api.github.com/users/JiazhaoLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JiazhaoLi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Sorry, I solve this by call `evaluate.load('glue_metric.py','sst-2')`\r\n"
] | 2023-06-14T17:59:35 | 2023-06-14T18:04:16 | 2023-06-14T18:04:16 |
NONE
| null | null | null |
### Describe the bug
Currently, the server is off-line. I am using the glue metric from a local file downloaded from the Hub.
I downloaded/cached the datasets using `load_dataset('glue','sst2', cache_dir='/xxx')`, and in off-line mode I use `load_dataset('xxx/glue.py','sst2', cache_dir='/xxx')`. I can successfully reuse the cached datasets.
My problem is with load_metric.
When I run `load_metric('xxx/glue_metric.py','sst2',cache_dir='/xxx')`, it returns
` File "xx/lib64/python3.9/site-packages/datasets/utils/deprecation_utils.py", line 46, in wrapper
return deprecated_function(*args, **kwargs)
File "xx//lib64/python3.9/site-packages/datasets/load.py", line 1392, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable`
Thanks in advance for help!
### Steps to reproduce the bug
N/A
### Expected behavior
N/A
### Environment info
`datasets == 2.12.0`
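As noted in the comment above, the metric script can be loaded offline through `evaluate` instead of the deprecated `load_metric`; a minimal sketch, assuming the script was downloaded locally (the GLUE config name for SST-2 is `sst2`):
```python
import evaluate

# Point evaluate at the locally downloaded metric script.
glue_metric = evaluate.load("/xxx/glue_metric.py", "sst2")
print(glue_metric.compute(predictions=[0, 1], references=[0, 1]))  # e.g. {'accuracy': 1.0}
```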
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5959/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5958
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5958/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5958/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5958/events
|
https://github.com/huggingface/datasets/pull/5958
| 1,757,265,971 |
PR_kwDODunzps5TA3__
| 5,958 |
set dev version
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 2023-06-14T16:26:34 | 2023-06-14T16:34:55 | 2023-06-14T16:26:51 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5958",
"html_url": "https://github.com/huggingface/datasets/pull/5958",
"diff_url": "https://github.com/huggingface/datasets/pull/5958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5958.patch",
"merged_at": "2023-06-14T16:26:51"
}
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5958/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5957
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5957/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5957/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5957/events
|
https://github.com/huggingface/datasets/pull/5957
| 1,757,252,466 |
PR_kwDODunzps5TA1EB
| 5,957 |
Release: 2.13.0
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 2023-06-14T16:17:26 | 2023-06-14T16:33:39 | 2023-06-14T16:24:39 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5957",
"html_url": "https://github.com/huggingface/datasets/pull/5957",
"diff_url": "https://github.com/huggingface/datasets/pull/5957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5957.patch",
"merged_at": "2023-06-14T16:24:39"
}
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5957/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5956
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5956/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5956/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5956/events
|
https://github.com/huggingface/datasets/pull/5956
| 1,756,959,367 |
PR_kwDODunzps5S_1o2
| 5,956 |
Fix ArrowExamplesIterable.shard_data_sources
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 2023-06-14T13:50:38 | 2023-06-14T14:43:12 | 2023-06-14T14:33:45 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5956",
"html_url": "https://github.com/huggingface/datasets/pull/5956",
"diff_url": "https://github.com/huggingface/datasets/pull/5956.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5956.patch",
"merged_at": "2023-06-14T14:33:45"
}
|
ArrowExamplesIterable.shard_data_sources was outdated
I also fixed a warning message by not using format_type= in with_format()
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5956/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5955
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5955/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5955/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5955/events
|
https://github.com/huggingface/datasets/issues/5955
| 1,756,827,133 |
I_kwDODunzps5otw39
| 5,955 |
Strange bug in loading local JSON files, using load_dataset
|
{
"login": "Night-Quiet",
"id": 73934131,
"node_id": "MDQ6VXNlcjczOTM0MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/73934131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Night-Quiet",
"html_url": "https://github.com/Night-Quiet",
"followers_url": "https://api.github.com/users/Night-Quiet/followers",
"following_url": "https://api.github.com/users/Night-Quiet/following{/other_user}",
"gists_url": "https://api.github.com/users/Night-Quiet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Night-Quiet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night-Quiet/subscriptions",
"organizations_url": "https://api.github.com/users/Night-Quiet/orgs",
"repos_url": "https://api.github.com/users/Night-Quiet/repos",
"events_url": "https://api.github.com/users/Night-Quiet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Night-Quiet/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 2023-06-14T12:46:00 | 2023-06-21T14:42:15 | 2023-06-21T14:42:15 |
NONE
| null | null | null |
### Describe the bug
I am using `load_dataset` to load a JSON file, but I found a strange bug: an error is reported when the length of the JSON file exceeds 160000 items (the exact threshold is uncertain). I have checked the data with the code below and found no issues, so I cannot determine the true cause of this error.
The data is a list containing a dictionary. As follows:
[
{'input': 'someting...', 'target': 'someting...', 'type': 'someting...', 'history': ['someting...', ...]},
...
]
### Steps to reproduce the bug
```python
import json
from datasets import load_dataset

path = "target.json"
temp_path = "temp.json"

with open(path, "r") as f:
    data = json.load(f)
print(f"\n-------the JSON file length is: {len(data)}-------\n")

with open(temp_path, "w") as f:
    json.dump(data[:160000], f)
dataset = load_dataset("json", data_files=temp_path)
print("\n-------This works when the JSON file length is 160000-------\n")

with open(temp_path, "w") as f:
    json.dump(data[160000:], f)
dataset = load_dataset("json", data_files=temp_path)
print("\n-------This works and eliminates data issues-------\n")

with open(temp_path, "w") as f:
    json.dump(data[:170000], f)
dataset = load_dataset("json", data_files=temp_path)
```
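A hedged diagnostic sketch, continuing the snippet above: since the Arrow error complains about mixed list and non-list values, checking which fields change Python type inside the failing slice can locate the offending records before `load_dataset` is involved.
```python
from collections import defaultdict

# Collect the set of Python types seen for each field in the slice that fails.
types_per_field = defaultdict(set)
for record in data[:170000]:
    for feature, value in record.items():
        types_per_field[feature].add(type(value).__name__)

for feature, seen_types in sorted(types_per_field.items()):
    if len(seen_types) > 1:
        print(f"{feature!r} mixes types: {sorted(seen_types)}")
```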
### Expected behavior
```
-------the JSON file length is: 173049-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3328.81it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 639.47it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 265.85it/s]
-------This works when the JSON file length is 160000-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 2038.05it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 794.83it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 681.00it/s]
-------This works and eliminates data issues-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-63f391c89599c7b0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3682.44it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 788.70it/s]
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/home/lakala/hjc/code/pycode/glm/temp.json' with error <class 'pyarrow.lib.ArrowInvalid'>: cannot mix list and non-list, non-null values
Traceback (most recent call last):
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
for _, table in generator:
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables
raise ValueError(f"Not able to read records in the JSON file at {file}.") from None
ValueError: Not able to read records in the JSON file at /home/lakala/hjc/code/pycode/glm/temp.json.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/lakala/hjc/code/pycode/glm/test.py", line 22, in <module>
dataset = load_dataset("json", data_files=temp_path)
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1746, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1891, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
```
Ubuntu==22.04
python==3.8
pytorch-transformers==1.2.0
transformers== 4.27.1
datasets==2.12.0
numpy==1.24.3
pandas==1.5.3
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5955/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5954
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5954/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5954/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5954/events
|
https://github.com/huggingface/datasets/pull/5954
| 1,756,572,994 |
PR_kwDODunzps5S-hSP
| 5,954 |
Better filenotfound for gated
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-14T10:33:10 | 2023-06-14T12:33:27 | 2023-06-14T12:26:31 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5954",
"html_url": "https://github.com/huggingface/datasets/pull/5954",
"diff_url": "https://github.com/huggingface/datasets/pull/5954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5954.patch",
"merged_at": "2023-06-14T12:26:31"
}
|
close https://github.com/huggingface/datasets/issues/5953
<img width="1292" alt="image" src="https://github.com/huggingface/datasets/assets/42851186/270fe5bc-1739-4878-b7bc-ab6d35336d4d">
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5954/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5953
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5953/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5953/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5953/events
|
https://github.com/huggingface/datasets/issues/5953
| 1,756,520,523 |
I_kwDODunzps5osmBL
| 5,953 |
Bad error message when trying to download gated dataset
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-14T10:03:39 | 2023-06-14T16:36:51 | 2023-06-14T12:26:32 |
MEMBER
| null | null | null |
### Describe the bug
When I attempt to download a gated model from the Hub without being logged in, I get a nice error message, e.g.:
```sh
Repository Not Found for url: https://huggingface.co/api/models/DeepFloyd/IF-I-XL-v1.0.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password..
Will try to load from local cache.
```
If I do the same for a gated dataset on the Hub, I'm not given a nice error message IMO:
```sh
File ~/hf/lib/python3.10/site-packages/fsspec/implementations/http.py:430, in HTTPFileSystem._info(self, url, **kwargs)
427 except Exception as exc:
428 if policy == "get":
429 # If get failed, then raise a FileNotFoundError
--> 430 raise FileNotFoundError(url) from exc
431 logger.debug(str(exc))
433 return {"name": url, "size": None, **info, "type": "file"}
FileNotFoundError: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0/resolve/main/n_shards.json
```
### Steps to reproduce the bug
```
huggingface-cli logout
```
and then:
```py
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Swahili
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "sw", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
sw_sample = next(iter(stream_data))["audio"]["array"]
```
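While a clearer error message is being added, one way to surface the gating problem explicitly is to check repo access with `huggingface_hub` before loading (a sketch; the repo id matches the example above, and the exception names assume a recent `huggingface_hub` release):

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

api = HfApi()
try:
    api.dataset_info("mozilla-foundation/common_voice_13_0")
except GatedRepoError:
    print("Gated dataset: accept the terms on the Hub and run `huggingface-cli login` first.")
except RepositoryNotFoundError:
    print("Dataset not found, or it is private and you are not authenticated.")
```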
### Expected behavior
Better error message
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5953/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5952
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5952/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5952/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5952/events
|
https://github.com/huggingface/datasets/pull/5952
| 1,756,481,591 |
PR_kwDODunzps5S-OIh
| 5,952 |
Add Arrow builder docs
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-14T09:42:46 | 2023-06-14T14:42:31 | 2023-06-14T14:34:39 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5952",
"html_url": "https://github.com/huggingface/datasets/pull/5952",
"diff_url": "https://github.com/huggingface/datasets/pull/5952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5952.patch",
"merged_at": "2023-06-14T14:34:39"
}
|
following https://github.com/huggingface/datasets/pull/5944
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5952/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5951
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5951/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5951/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5951/events
|
https://github.com/huggingface/datasets/issues/5951
| 1,756,363,546 |
I_kwDODunzps5or_sa
| 5,951 |
What is the Right way to use discofuse dataset??
|
{
"login": "akesh1235",
"id": 125154243,
"node_id": "U_kgDOB3Wzww",
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akesh1235",
"html_url": "https://github.com/akesh1235",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-14T08:38:39 | 2023-06-14T13:25:06 | 2023-06-14T12:10:16 |
NONE
| null | null | null |
[Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6)
**Below is my understanding of how to use it. Is it correct? :question: :question:**
The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** are:
1. **coherent_first_sentence**
2. **coherent_second_sentence**
3. **incoherent_first_sentence**
4. **incoherent_second_sentence**
The **`encoder` will take these four columns as input and encode them into a sequence of hidden states. The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**
The **discourse type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns are used to provide additional information about the dataset, but they are not necessary for the task of sentence fusion.
Please correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically?
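In case it helps, here is a minimal preprocessing sketch (an assumption on my part, not an official recipe): treat the incoherent sentence pair as the encoder input and the coherent text as the decoder target, e.g. with a T5-style model:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
ds = load_dataset("discofuse", "discofuse-wikipedia", split="train[:1%]")

def preprocess(example):
    # Assumed framing: fuse the two incoherent sentences into the coherent text.
    source = example["incoherent_first_sentence"] + " " + example["incoherent_second_sentence"]
    target = (example["coherent_first_sentence"] + " " + example["coherent_second_sentence"]).strip()
    model_inputs = tokenizer(source, truncation=True, max_length=256)
    model_inputs["labels"] = tokenizer(text_target=target, truncation=True, max_length=256)["input_ids"]
    return model_inputs

tokenized = ds.map(preprocess, remove_columns=ds.column_names)
```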
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5951/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5950
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5950/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5950/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5950/events
|
https://github.com/huggingface/datasets/issues/5950
| 1,755,197,946 |
I_kwDODunzps5onjH6
| 5,950 |
Support for data with instance-wise dictionary as features
|
{
"login": "richardwth",
"id": 33274336,
"node_id": "MDQ6VXNlcjMzMjc0MzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/33274336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardwth",
"html_url": "https://github.com/richardwth",
"followers_url": "https://api.github.com/users/richardwth/followers",
"following_url": "https://api.github.com/users/richardwth/following{/other_user}",
"gists_url": "https://api.github.com/users/richardwth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richardwth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richardwth/subscriptions",
"organizations_url": "https://api.github.com/users/richardwth/orgs",
"repos_url": "https://api.github.com/users/richardwth/repos",
"events_url": "https://api.github.com/users/richardwth/events{/privacy}",
"received_events_url": "https://api.github.com/users/richardwth/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-13T15:49:00 | 2023-06-14T12:13:38 | null |
NONE
| null | null | null |
### Feature request
I notice that when loading data instances whose feature type is a Python dictionary, the dictionary keys are broadcast so that every instance ends up with the same set of keys. Please see an example in the Motivation section.
Is it possible to avoid this behavior, i.e., load dictionary features as they are and not broadcast the keys among instances? Please note that these dictionaries have to be processed dynamically at each training iteration into strings (and tokenized).
### Motivation
I am trying to load a dataset from a json file. Each instance of the dataset has a feature that is a dictionary but its keys depend on the instance. Every two instances may have different keys. For example, imagine a dataset that contains a set of math expressions from a bunch of mutually redundant expressions:
```
{
"index": 0,
"feature": {
"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
...
}
},
...
{
"index": 9999,
"feature": {
"x >= 6": ["x >= 6", "x >= 0", "x >= -1"],
...
}
},
...
```
When directly loading the dataset using `data = load_dataset("json", data_files=file_paths, split='train')`, each instance ends up with all the keys from the other instances, with None as values. That is, the instance at index 0 becomes:
```
{
"index": 0,
"feature": {
"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
...
"x >= 6": None, # keys from other instances
...
}
},
```
This is not desirable. Moreover, an error is raised if I attempt to combine two such datasets using `data = concatenate_datasets(multi_datasets)`, presumably because their dictionary features contain different keys.
A solution I can think of is to store the dictionary features as a long string, and evaluate it later. Please kindly suggest any other solution using existing methods of datasets.
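For reference, a sketch of the "store it as a string" workaround mentioned above (field names follow the example; using `json.dumps`/`json.loads` instead of `eval` for safety):

```python
import json
from datasets import Dataset

records = [
    {"index": 0, "feature": {"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"]}},
    {"index": 1, "feature": {"x >= 6": ["x >= 6", "x >= 0", "x >= -1"]}},
]

# Serialize the per-instance dictionary so every row has the same (string) schema.
ds = Dataset.from_list(
    [{"index": r["index"], "feature": json.dumps(r["feature"])} for r in records]
)

# Decode lazily when needed, e.g. inside a training collate function.
def decode(example):
    example["feature"] = json.loads(example["feature"])
    return example

print(decode(ds[0])["feature"])
```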
### Your contribution
N/A
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5950/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5949
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5949/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5949/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5949/events
|
https://github.com/huggingface/datasets/pull/5949
| 1,754,843,717 |
PR_kwDODunzps5S4oPC
| 5,949 |
Replace metadata utils with `huggingface_hub`'s RepoCard API
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-13T13:03:19 | 2023-06-27T16:47:51 | 2023-06-27T16:38:32 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5949",
"html_url": "https://github.com/huggingface/datasets/pull/5949",
"diff_url": "https://github.com/huggingface/datasets/pull/5949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5949.patch",
"merged_at": "2023-06-27T16:38:32"
}
|
Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`.
After removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources for the metadata UI.
PS: this change requires bumping `huggingface_hub` to 0.13.0 (Transformers requires 0.14.0, so should be ok)
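For context, a minimal sketch of the `huggingface_hub` RepoCard API this PR switches to (the repo id is just an example):

```python
from huggingface_hub import DatasetCard

card = DatasetCard.load("squad")          # any public dataset repo
print(card.data.to_dict())                # the YAML metadata block as a dict
card.data.license = "cc-by-4.0"           # edit metadata in place
# card.push_to_hub("your-namespace/your-dataset")  # write it back (requires auth)
```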
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5949/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5948
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5948/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5948/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5948/events
|
https://github.com/huggingface/datasets/pull/5948
| 1,754,794,611 |
PR_kwDODunzps5S4dUt
| 5,948 |
Fix sequence of array support for most dtype
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-13T12:38:59 | 2023-06-14T15:11:55 | 2023-06-14T15:03:33 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5948",
"html_url": "https://github.com/huggingface/datasets/pull/5948",
"diff_url": "https://github.com/huggingface/datasets/pull/5948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5948.patch",
"merged_at": "2023-06-14T15:03:33"
}
|
Fixes #5936
Also, a related fix to #5927
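For anyone landing here, a small sketch of the kind of feature this fix concerns (shape and dtype are arbitrary examples):

```python
import numpy as np
from datasets import Dataset, Features, Sequence, Array2D

features = Features({"maps": Sequence(Array2D(shape=(2, 2), dtype="float32"))})
ds = Dataset.from_dict(
    {"maps": [[np.zeros((2, 2), dtype="float32"), np.ones((2, 2), dtype="float32")]]},
    features=features,
)
print(ds[0]["maps"])
```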
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5948/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5947
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5947/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5947/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5947/events
|
https://github.com/huggingface/datasets/issues/5947
| 1,754,359,316 |
I_kwDODunzps5okWYU
| 5,947 |
Return the audio filename when decoding fails due to corrupt files
|
{
"login": "wetdog",
"id": 8949105,
"node_id": "MDQ6VXNlcjg5NDkxMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8949105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wetdog",
"html_url": "https://github.com/wetdog",
"followers_url": "https://api.github.com/users/wetdog/followers",
"following_url": "https://api.github.com/users/wetdog/following{/other_user}",
"gists_url": "https://api.github.com/users/wetdog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wetdog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wetdog/subscriptions",
"organizations_url": "https://api.github.com/users/wetdog/orgs",
"repos_url": "https://api.github.com/users/wetdog/repos",
"events_url": "https://api.github.com/users/wetdog/events{/privacy}",
"received_events_url": "https://api.github.com/users/wetdog/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-13T08:44:09 | 2023-06-14T12:45:01 | null |
NONE
| null | null | null |
### Feature request
Return the audio filename when audio decoding fails. Although there are currently some checks for mp3 and opus formats based on the library version, there are still cases where audio decoding can fail, e.g. a corrupt file.
### Motivation
When you try to load an audio dataset and decoding fails, you can't tell which file is corrupt:
```
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f5ab7e38290>: Format not recognised.
```
### Your contribution
Make a PR to add exception handling for `LibsndfileError` so that the audio filename or path is returned when soundfile decoding fails.
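In the meantime, a workaround sketch to locate corrupt files before decoding (assumes an `audiofolder`-style dataset; `decode=False` keeps the raw paths instead of decoding):

```python
import soundfile as sf
from datasets import load_dataset, Audio

ds = load_dataset("audiofolder", data_dir="path/to/audio", split="train")
ds = ds.cast_column("audio", Audio(decode=False))  # skip decoding, keep path/bytes

bad_files = []
for example in ds:
    path = example["audio"]["path"]
    try:
        sf.read(path)
    except Exception as err:  # includes soundfile's LibsndfileError
        bad_files.append((path, str(err)))

print(bad_files)
```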
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5947/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5946
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5946/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5946/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5946/events
|
https://github.com/huggingface/datasets/issues/5946
| 1,754,234,469 |
I_kwDODunzps5oj35l
| 5,946 |
IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??
|
{
"login": "syngokhan",
"id": 70565543,
"node_id": "MDQ6VXNlcjcwNTY1NTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/70565543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/syngokhan",
"html_url": "https://github.com/syngokhan",
"followers_url": "https://api.github.com/users/syngokhan/followers",
"following_url": "https://api.github.com/users/syngokhan/following{/other_user}",
"gists_url": "https://api.github.com/users/syngokhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/syngokhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/syngokhan/subscriptions",
"organizations_url": "https://api.github.com/users/syngokhan/orgs",
"repos_url": "https://api.github.com/users/syngokhan/repos",
"events_url": "https://api.github.com/users/syngokhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/syngokhan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-13T07:34:15 | 2023-07-14T12:04:48 | null |
NONE
| null | null | null |
### Describe the bug
in <cell line: 1>:1 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train │
│ │
│ 1534 │ │ inner_training_loop = find_executable_batch_size( │
│ 1535 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1536 │ │ ) │
│ ❱ 1537 │ │ return inner_training_loop( │
│ 1538 │ │ │ args=args, │
│ 1539 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1540 │ │ │ trial=trial, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1789 in _inner_training_loop │
│ │
│ 1786 │ │ │ │ rng_to_sync = True │
│ 1787 │ │ │ │
│ 1788 │ │ │ step = -1 │
│ ❱ 1789 │ │ │ for step, inputs in enumerate(epoch_iterator): │
│ 1790 │ │ │ │ total_batched_samples += 1 │
│ 1791 │ │ │ │ if rng_to_sync: │
│ 1792 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py:377 in __iter__ │
│ │
│ 374 │ │ dataloader_iter = super().__iter__() │
│ 375 │ │ # We iterate one batch ahead to check when we are at the end │
│ 376 │ │ try: │
│ ❱ 377 │ │ │ current_batch = next(dataloader_iter) │
│ 378 │ │ except StopIteration: │
│ 379 │ │ │ yield │
│ 380 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │
│ │
│ 630 │ │ │ if self._sampler_iter is None: │
│ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │
│ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │
│ ❱ 633 │ │ │ data = self._next_data() │
│ 634 │ │ │ self._num_yielded += 1 │
│ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │
│ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │
│ │
│ 674 │ │
│ 675 │ def _next_data(self): │
│ 676 │ │ index = self._next_index() # may raise StopIteration │
│ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │
│ 678 │ │ if self._pin_memory: │
│ 679 │ │ │ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) │
│ 680 │ │ return data │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:49 in fetch │
│ │
│ 46 │ def fetch(self, possibly_batched_index): │
│ 47 │ │ if self.auto_collation: │
│ 48 │ │ │ if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__: │
│ ❱ 49 │ │ │ │ data = self.dataset.__getitems__(possibly_batched_index) │
│ 50 │ │ │ else: │
│ 51 │ │ │ │ data = [self.dataset[idx] for idx in possibly_batched_index] │
│ 52 │ │ else: │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2782 in __getitems__ │
│ │
│ 2779 │ │
│ 2780 │ def __getitems__(self, keys: List) -> List: │
│ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │
│ ❱ 2782 │ │ batch = self.__getitem__(keys) │
│ 2783 │ │ n_examples = len(batch[next(iter(batch))]) │
│ 2784 │ │ return [{col: array[i] for col, array in batch.items()} for i in range(n_example │
│ 2785 │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2778 in __getitem__ │
│ │
│ 2775 │ │
│ 2776 │ def __getitem__(self, key): # noqa: F811 │
│ 2777 │ │ """Can be used to index columns (by string names) or rows (by integer index or i │
│ ❱ 2778 │ │ return self._getitem(key) │
│ 2779 │ │
│ 2780 │ def __getitems__(self, keys: List) -> List: │
│ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2762 in _getitem │
│ │
│ 2759 │ │ format_kwargs = kwargs["format_kwargs"] if "format_kwargs" in kwargs else self._ │
│ 2760 │ │ format_kwargs = format_kwargs if format_kwargs is not None else {} │
│ 2761 │ │ formatter = get_formatter(format_type, features=self._info.features, **format_kw │
│ ❱ 2762 │ │ pa_subtable = query_table(self._data, key, indices=self._indices if self._indice │
│ 2763 │ │ formatted_output = format_table( │
│ 2764 │ │ │ pa_subtable, key, formatter=formatter, format_columns=format_columns, output │
│ 2765 │ │ ) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:578 in query_table │
│ │
│ 575 │ │ _check_valid_column_key(key, table.column_names) │
│ 576 │ else: │
│ 577 │ │ size = indices.num_rows if indices is not None else table.num_rows │
│ ❱ 578 │ │ _check_valid_index_key(key, size) │
│ 579 │ # Query the main table │
│ 580 │ if indices is None: │
│ 581 │ │ pa_subtable = _query_table(table, key) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:531 in │
│ _check_valid_index_key │
│ │
│ 528 │ │ │ _check_valid_index_key(min(key), size=size) │
│ 529 │ elif isinstance(key, Iterable): │
│ 530 │ │ if len(key) > 0: │
│ ❱ 531 │ │ │ _check_valid_index_key(int(max(key)), size=size) │
│ 532 │ │ │ _check_valid_index_key(int(min(key)), size=size) │
│ 533 │ else: │
│ 534 │ │ _raise_bad_key_type(key) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:521 in │
│ _check_valid_index_key │
│ │
│ 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: │
│ 519 │ if isinstance(key, int): │
│ 520 │ │ if (key < 0 and key + size < 0) or (key >= size): │
│ ❱ 521 │ │ │ raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") │
│ 522 │ │ return │
│ 523 │ elif isinstance(key, slice): │
│ 524 │ │ pass
### Steps to reproduce the bug
```python
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import Dataset,load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
MODEL_NAME = "tiiuae/falcon-7b"
bnb_config = BitsAndBytesConfig(
load_in_4bit = True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map = "auto",
trust_remote_code = True,
quantization_config = bnb_config
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
r = 16,
lora_alpha = 32,
target_modules = ["query_key_value"],
lora_dropout = 0.05,
bias = "none",
task_type = "CASUAL_LM"
)
model = get_peft_model(model,config)
print_trainable_parameters(model)
def generate_prompt(data_point):
return f"""
<human>: {data_point["question"]}
<assistant>: {data_point["answer"]}
""".strip()
def generate_and_tokenize_prompt(data_point):
full_prompt = generate_prompt(data_point)
tokenized_full_prompt = tokenizer(full_prompt, padding = True, truncation = True,return_tensors = None)
return dict({
"input_ids" : tokenized_full_prompt["input_ids"],
"attention_mask" : tokenized_full_prompt["attention_mask"]
})
data = data["train"].shuffle().map(generate_and_tokenize_prompt, batched = False)
OUTPUT_DIR = "experiments"
trainings_args = transformers.TrainingArguments(
per_device_train_batch_size = 1,
gradient_accumulation_steps = 4,
num_train_epochs = 1,
learning_rate = 2e-4,
fp16 = True,
save_total_limit = 3,
logging_steps = 1,
output_dir = OUTPUT_DIR,
max_steps = 80,
optim = "paged_adamw_8bit",
lr_scheduler_type = "cosine",
warmup_ratio = 0.05,
#remove_unused_columns=True
)
trainer = transformers.Trainer(
model = model,
train_dataset = data,
args = trainings_args,
data_collator = transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
trainer.train()
```

This raises:

IndexError: Invalid key: 32 is out of bounds for size 0

The dataset format is like:
[{"question": "How can I create an account?", "answer": "To create an account, click on the 'Sign Up' button on the top right corner of our website and follow the instructions to complete the registration process."}, .... ]
### Expected behavior
-
### Environment info
!pip install -q pip
!pip install -q bitsandbytes==0.39.0
!pip install -q torch==2.0.1
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q git+https://github.com/huggingface/peft.git
!pip install -q git+https://github.com/huggingface/accelerate.git
!pip install -q datasets
!pip install -q loralib==0.1.1
!pip install -q einops==0.6.1
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import Dataset,load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5946/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5945
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5945/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5945/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5945/events
|
https://github.com/huggingface/datasets/issues/5945
| 1,754,084,577 |
I_kwDODunzps5ojTTh
| 5,945 |
Failing to upload dataset to the hub
|
{
"login": "Ar770",
"id": 77382661,
"node_id": "MDQ6VXNlcjc3MzgyNjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/77382661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ar770",
"html_url": "https://github.com/Ar770",
"followers_url": "https://api.github.com/users/Ar770/followers",
"following_url": "https://api.github.com/users/Ar770/following{/other_user}",
"gists_url": "https://api.github.com/users/Ar770/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ar770/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ar770/subscriptions",
"organizations_url": "https://api.github.com/users/Ar770/orgs",
"repos_url": "https://api.github.com/users/Ar770/repos",
"events_url": "https://api.github.com/users/Ar770/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ar770/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-13T05:46:46 | 2023-07-24T11:56:40 | 2023-07-24T11:56:40 |
NONE
| null | null | null |
### Describe the bug
Trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 GB) to the Hub with `push_to_hub` doesn't work.
From time to time one piece of the data (a Parquet shard) gets pushed, and then I get `RemoteDisconnected` even though my internet connection is stable.
Please help.
I've been trying to upload the dataset for almost a week.
Thanks
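Until the root cause is found, a retry-with-smaller-shards sketch sometimes gets large uploads through (the repo id and shard size are placeholders):

```python
import time
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})  # placeholder; use your real dataset here

for attempt in range(10):
    try:
        ds.push_to_hub("your-username/your-audio-dataset", max_shard_size="500MB")
        break
    except Exception as err:  # e.g. RemoteDisconnected / ConnectionError
        print(f"push attempt {attempt} failed: {err}; retrying in 60s")
        time.sleep(60)
```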
### Steps to reproduce the bug
not relevant
### Expected behavior
Be able to upload the dataset.
### Environment info
python: 3.9
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5945/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5944
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5944/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5944/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5944/events
|
https://github.com/huggingface/datasets/pull/5944
| 1,752,882,200 |
PR_kwDODunzps5Sx7O4
| 5,944 |
Arrow dataset builder to be able to load and stream Arrow datasets
|
{
"login": "mariusz-jachimowicz-83",
"id": 10278877,
"node_id": "MDQ6VXNlcjEwMjc4ODc3",
"avatar_url": "https://avatars.githubusercontent.com/u/10278877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariusz-jachimowicz-83",
"html_url": "https://github.com/mariusz-jachimowicz-83",
"followers_url": "https://api.github.com/users/mariusz-jachimowicz-83/followers",
"following_url": "https://api.github.com/users/mariusz-jachimowicz-83/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusz-jachimowicz-83/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariusz-jachimowicz-83/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusz-jachimowicz-83/subscriptions",
"organizations_url": "https://api.github.com/users/mariusz-jachimowicz-83/orgs",
"repos_url": "https://api.github.com/users/mariusz-jachimowicz-83/repos",
"events_url": "https://api.github.com/users/mariusz-jachimowicz-83/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariusz-jachimowicz-83/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-12T14:21:49 | 2023-06-13T17:36:02 | 2023-06-13T17:29:01 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5944",
"html_url": "https://github.com/huggingface/datasets/pull/5944",
"diff_url": "https://github.com/huggingface/datasets/pull/5944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5944.patch",
"merged_at": "2023-06-13T17:29:01"
}
|
This adds an Arrow dataset builder to load and stream already preprocessed Arrow files.
It's related to https://github.com/huggingface/datasets/issues/3035
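A quick sketch of what this enables (the file name is an example, and the packaged builder name follows the PR description):

```python
import pyarrow as pa
from datasets import load_dataset

# Write a small Arrow (IPC stream) file, then load/stream it back.
table = pa.table({"text": ["a", "b", "c"]})
with pa.OSFile("data.arrow", "wb") as sink:
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table)

ds = load_dataset("arrow", data_files="data.arrow", split="train")
ds_streamed = load_dataset("arrow", data_files="data.arrow", split="train", streaming=True)
print(ds[0], next(iter(ds_streamed)))
```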
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5944/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5942
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5942/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5942/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5942/events
|
https://github.com/huggingface/datasets/pull/5942
| 1,752,021,681 |
PR_kwDODunzps5Su-V4
| 5,942 |
Pass datasets-cli additional args as kwargs to DatasetBuilder in `run_beam.py`
|
{
"login": "graelo",
"id": 84066822,
"node_id": "MDQ6VXNlcjg0MDY2ODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/84066822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/graelo",
"html_url": "https://github.com/graelo",
"followers_url": "https://api.github.com/users/graelo/followers",
"following_url": "https://api.github.com/users/graelo/following{/other_user}",
"gists_url": "https://api.github.com/users/graelo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/graelo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/graelo/subscriptions",
"organizations_url": "https://api.github.com/users/graelo/orgs",
"repos_url": "https://api.github.com/users/graelo/repos",
"events_url": "https://api.github.com/users/graelo/events{/privacy}",
"received_events_url": "https://api.github.com/users/graelo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-12T06:50:50 | 2023-06-30T09:15:00 | null |
NONE
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5942",
"html_url": "https://github.com/huggingface/datasets/pull/5942",
"diff_url": "https://github.com/huggingface/datasets/pull/5942.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5942.patch",
"merged_at": null
}
|
Hi,
Following this <https://discuss.huggingface.co/t/how-to-preprocess-a-wikipedia-dataset-using-dataflowrunner/41991/3>, here is a simple PR to pass any additional args to datasets-cli as kwargs in the DatasetBuilder in `run_beam.py`.
I also took the liberty to add missing setup steps to the `beam.mdx` docs in order to help everyone.
@lhoestq
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5942/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5941
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5941/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5941/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5941/events
|
https://github.com/huggingface/datasets/issues/5941
| 1,751,838,897 |
I_kwDODunzps5oavCx
| 5,941 |
Load Data Sets Too Slow In Train Seq2seq Model
|
{
"login": "xyx361100238",
"id": 19569322,
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyx361100238",
"html_url": "https://github.com/xyx361100238",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-12T03:58:43 | 2023-08-15T02:52:22 | 2023-08-15T02:52:22 |
NONE
| null | null | null |
### Describe the bug
The 'Generating train split' step in `load_dataset` is too slow:

### Steps to reproduce the bug
Data: own data, 16 kHz 16-bit mono WAV
Oficial Script:[ run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py)
Add Code:
if data_args.data_path is not None:
print(data_args.data_path)
raw_datasets = load_dataset("audiofolder", data_dir=data_args.data_path, cache_dir=model_args.cache_dir)
raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
raw_datasets = raw_datasets["train"].train_test_split(test_size=0.005, shuffle=True)
(change cache_dir to another path, e.g. /DATA/cache)
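One thing worth trying (a sketch; recent `datasets` releases support multiprocessing for split generation via `num_proc`, and the data path is a placeholder):

```python
from datasets import load_dataset, Audio

raw_datasets = load_dataset(
    "audiofolder",
    data_dir="/path/to/wavs",
    cache_dir="/DATA/cache",
    num_proc=8,            # parallelize the "Generating train split" step
)
raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
```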
### Expected behavior
Load data fast, at least 1000+ examples/s, e.g.:
`Generating train split: 387875 examples [32:24:45, 1154.83 examples/s]`
### Environment info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5941/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5990
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5990/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5990/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5990/events
|
https://github.com/huggingface/datasets/issues/5990
| 1,774,389,854 |
I_kwDODunzps5pwwpe
| 5,990 |
Pushing a large dataset on the hub consistently hangs
|
{
"login": "AntreasAntoniou",
"id": 10792502,
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AntreasAntoniou",
"html_url": "https://github.com/AntreasAntoniou",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-10T14:46:47 | 2023-08-17T09:54:11 | null |
NONE
| null | null | null |
### Describe the bug
Once I have locally built a large dataset that I want to push to the Hub, I use the recommended `.push_to_hub` approach, and after pushing a few shards it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to catch it happening, kill the process, and restart, it wastes an enormous amount of time -- so I'm reporting it here and seeking help.
I already tried installing `hf_transfer`, but it doesn't support byte file uploads, so I uninstalled it.
### Reproduction
```python
import multiprocessing as mp
import pathlib
from math import ceil
import datasets
import numpy as np
from tqdm.auto import tqdm
from tali.data.data import select_subtitles_between_timestamps
from tali.utils import load_json
tali_dataset_dir = "/data/"
if __name__ == "__main__":
full_dataset = datasets.load_dataset(
"Antreas/TALI", num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir
)
def data_generator(set_name, percentage: float = 1.0):
dataset = full_dataset[set_name]
for item in tqdm(dataset):
video_list = item["youtube_content_video"]
video_list = np.random.choice(
video_list, int(ceil(len(video_list) * percentage))
)
if len(video_list) == 0:
continue
captions = item["youtube_subtitle_text"]
captions = select_subtitles_between_timestamps(
subtitle_dict=load_json(
captions.replace(
"/data/",
tali_dataset_dir,
)
),
starting_timestamp=0,
ending_timestamp=100000000,
)
for video_path in video_list:
temp_path = video_path.replace("/data/", tali_dataset_dir)
video_path_actual: pathlib.Path = pathlib.Path(temp_path)
if video_path_actual.exists():
item["youtube_content_video"] = open(video_path_actual, "rb").read()
item["youtube_subtitle_text"] = captions
yield item
train_generator = lambda: data_generator("train", percentage=0.1)
val_generator = lambda: data_generator("val")
test_generator = lambda: data_generator("test")
train_data = datasets.Dataset.from_generator(
train_generator,
num_proc=mp.cpu_count(),
writer_batch_size=5000,
cache_dir=tali_dataset_dir,
)
val_data = datasets.Dataset.from_generator(
val_generator,
writer_batch_size=5000,
num_proc=mp.cpu_count(),
cache_dir=tali_dataset_dir,
)
test_data = datasets.Dataset.from_generator(
test_generator,
writer_batch_size=5000,
num_proc=mp.cpu_count(),
cache_dir=tali_dataset_dir,
)
dataset = datasets.DatasetDict(
{
"train": train_data,
"val": val_data,
"test": test_data,
}
)
succesful_competion = False
while not succesful_competion:
try:
dataset.push_to_hub(repo_id="Antreas/TALI-small", max_shard_size="5GB")
succesful_competion = True
except Exception as e:
print(e)
```
### Logs
```shell
Pushing dataset shards to the dataset hub: 33%|██████████████████████████████████████▎ | 7/21 [24:33<49:06, 210.45s/it]
Error while uploading 'data/val-00007-of-00021-6b216a984af1a4c8.parquet' to the Hub.
Pushing split train to the Hub.
Resuming upload of the dataset shards.
Pushing dataset shards to the dataset hub: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [42:10<00:00, 55.01s/it]
Pushing split val to the Hub.
Resuming upload of the dataset shards.
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.55ba/s]
Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.51s/it]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.39ba/s]
Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:30<00:00, 30.19s/it]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.28ba/s]
Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:24<00:00, 24.08s/it]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.42ba/s]
Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.97s/it]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.49ba/s]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.54ba/s^
Upload 1 LFS files: 0%| | 0/1 [04:42<?, ?it/s]
Pushing dataset shards to the dataset hub: 52%|████████████████████████████████████████████████████████████▏ | 11/21 [17:23<15:48, 94.82s/it]
That's where it got stuck
```
### System info
```shell
- huggingface_hub version: 0.15.1
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: Antreas
- Configured git credential helpers: store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.1.0.dev20230606+cu121
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.5.0
- hf_transfer: N/A
- gradio: N/A
- numpy: 1.24.3
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets
- HF_TOKEN_PATH: /root/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5990/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5939
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5939/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5939/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5939/events
|
https://github.com/huggingface/datasets/issues/5939
| 1,749,955,883 |
I_kwDODunzps5oTjUr
| 5,939 |
.
|
{
"login": "flckv",
"id": 103381497,
"node_id": "U_kgDOBil5-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flckv",
"html_url": "https://github.com/flckv",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"repos_url": "https://api.github.com/users/flckv/repos",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-09T14:01:34 | 2023-06-12T12:19:34 | 2023-06-12T12:19:19 |
NONE
| null | null | null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5939/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5938
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5938/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5938/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5938/events
|
https://github.com/huggingface/datasets/pull/5938
| 1,749,462,851 |
PR_kwDODunzps5SmbkI
| 5,938 |
Make get_from_cache use custom temp filename that is locked
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-09T09:01:13 | 2023-06-14T13:35:38 | 2023-06-14T13:27:24 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5938",
"html_url": "https://github.com/huggingface/datasets/pull/5938",
"diff_url": "https://github.com/huggingface/datasets/pull/5938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5938.patch",
"merged_at": "2023-06-14T13:27:24"
}
|
This PR ensures that the temporary filename created is the same as the one that is locked, while writing to the cache.
This PR stops using `tempfile` to generate the temporary filename.
Additionally, the behavior now is aligned for both `resume_download` `True` and `False`.
Refactor temp_file_manager so that it uses the filename that is locked:
- Use: `cache_path + ".incomplete"`, when the locked one is `cache_path + ".lock"`
Before, it was using `tempfile` inside `cache_dir`, which was not locked: although a name collision was very improbable (8 random characters), it was not impossible with a huge number of concurrent processes.
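Concretely, the locked-temp-file pattern described above looks roughly like this (a sketch with made-up helper names, assuming a `filelock`-style lock; not the actual code):
```python
import os
from filelock import FileLock  # assumed lock implementation

def download_to_cache(cache_path: str, fetch) -> str:
    """Sketch: write to a deterministic temp name covered by the same lock as the cache entry."""
    lock_path = cache_path + ".lock"
    temp_path = cache_path + ".incomplete"
    with FileLock(lock_path):
        if os.path.exists(cache_path):  # another process already finished the download
            return cache_path
        with open(temp_path, "wb") as temp_file:
            fetch(temp_file)  # e.g. a helper that streams the HTTP response into temp_file
        os.replace(temp_path, cache_path)  # move the completed file into place
    return cache_path
```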
Maybe related to "Stale file handle" issues caused by `tempfile`:
- [ ] https://huggingface.co/datasets/tapaco/discussions/4
- [ ] https://huggingface.co/datasets/xcsr/discussions/1
- [ ] https://huggingface.co/datasets/covost2/discussions/3
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1188, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 907, in get_module
dataset_readme_path = self.download_dataset_readme_file()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 896, in download_dataset_readme_file
return cached_path(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache
http_get(
File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__
result = self.file.__exit__(exc, value, tb)
OSError: [Errno 116] Stale file handle
```
- the stale file handle error can be raised when `tempfile` tries to close (when exiting its context manager) a file that has already been closed by another process
- note that `tempfile` filenames are randomly generated but not locked in our code
CC: @severo
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5938/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5938/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5937
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5937/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5937/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5937/events
|
https://github.com/huggingface/datasets/pull/5937
| 1,749,388,597 |
PR_kwDODunzps5SmLIs
| 5,937 |
Avoid parallel redownload in cache
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-09T08:18:36 | 2023-06-14T12:30:59 | 2023-06-14T12:23:57 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5937",
"html_url": "https://github.com/huggingface/datasets/pull/5937",
"diff_url": "https://github.com/huggingface/datasets/pull/5937.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5937.patch",
"merged_at": "2023-06-14T12:23:57"
}
|
Avoid parallel redownload in the cache by retrying inside the lock if the path already exists.
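In other words, the cache check is repeated once the lock is held, along these lines (a sketch of the idea, assuming a `filelock`-style lock; not the actual diff):
```python
import os
from filelock import FileLock  # assumed lock implementation

def _get_from_cache_sketch(cache_path: str, download) -> str:
    if os.path.exists(cache_path):      # fast path, no lock needed
        return cache_path
    with FileLock(cache_path + ".lock"):
        if os.path.exists(cache_path):  # re-check inside the lock: another process
            return cache_path           # may have completed the download meanwhile
        download(cache_path)            # placeholder for the actual download logic
    return cache_path
```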
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5937/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5936
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5936/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5936/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5936/events
|
https://github.com/huggingface/datasets/issues/5936
| 1,748,424,388 |
I_kwDODunzps5oNtbE
| 5,936 |
Sequence of array not supported for most dtype
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-08T18:18:07 | 2023-06-14T15:03:34 | 2023-06-14T15:03:34 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Creating a dataset composed of a sequence of arrays fails for most dtypes (see the code below).
### Steps to reproduce the bug
```python
from datasets import Sequence, Array2D, Features, Dataset
import numpy as np
for dtype in [
"bool", # ok
"int8", # failed
"int16", # failed
"int32", # failed
"int64", # ok
"uint8", # failed
"uint16", # failed
"uint32", # failed
"uint64", # failed
"float16", # failed
"float32", # failed
"float64", # ok
]:
features = Features({"foo": Sequence(Array2D(dtype=dtype, shape=(2, 2)))})
sequence = [
[[1.0, 2.0], [3.0, 4.0]],
[[5.0, 6.0], [7.0, 8.0]],
]
array = np.array(sequence, dtype=dtype)
try:
dataset = Dataset.from_dict({"foo": [array]}, features=features)
except Exception as e:
print(f"Failed for dtype={dtype}")
```
Traceback for `dtype="int8"`:
```
Traceback (most recent call last):
File "/home/qgallouedec/datasets/a.py", line 29, in <module>
raise e
File "/home/qgallouedec/datasets/a.py", line 26, in <module>
dataset = Dataset.from_dict({"foo": [array]}, features=features)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 899, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 799, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 3725, in pyarrow.lib.Table.from_pydict
File "pyarrow/table.pxi", line 5254, in pyarrow.lib._from_pydict
File "pyarrow/array.pxi", line 350, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 236, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 204, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2091, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2139, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1967, in array_cast
return pa_type.wrap_array(array)
File "pyarrow/types.pxi", line 879, in pyarrow.lib.BaseExtensionType.wrap_array
TypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: int8>>, got list<item: list<item: int64>>
```
### Expected behavior
Not to fail.
### Environment info
- Python 3.10.6
- datasets: master branch
- Numpy: 1.23.4
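Until this is fixed, one possible workaround (an assumption based on the per-dtype results above, not a confirmed recommendation) is to store the data in one of the working dtypes and convert back on the consumer side:
```python
import numpy as np
from datasets import Array2D, Dataset, Features, Sequence

# store as int64 (reported to work above) and cast back to the target dtype after loading
features = Features({"foo": Sequence(Array2D(dtype="int64", shape=(2, 2)))})
array = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]], dtype="int8")
dataset = Dataset.from_dict({"foo": [array.astype("int64")]}, features=features)
recovered = np.asarray(dataset[0]["foo"], dtype="int8")
```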
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5936/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5935
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5935/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5935/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5935/events
|
https://github.com/huggingface/datasets/pull/5935
| 1,748,090,220 |
PR_kwDODunzps5Sh9Mg
| 5,935 |
Better row group size in push_to_hub
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-08T15:01:15 | 2023-06-09T17:47:37 | 2023-06-09T17:40:09 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5935",
"html_url": "https://github.com/huggingface/datasets/pull/5935",
"diff_url": "https://github.com/huggingface/datasets/pull/5935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5935.patch",
"merged_at": "2023-06-09T17:40:09"
}
|
This is a very simple change that improves `to_parquet` to use a more reasonable row group size for image and audio datasets.
This is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on HF
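For context, the knob in question corresponds to Parquet row groups; with pyarrow directly it looks roughly like this (illustrative values, not the PR's code):
```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"image_bytes": [b"\x00" * 1024] * 1000})
# smaller row groups let the viewer fetch a handful of rows without reading a huge chunk
pq.write_table(table, "data.parquet", row_group_size=100)
```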
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5935/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5934
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5934/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5934/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5934/events
|
https://github.com/huggingface/datasets/pull/5934
| 1,747,904,840 |
PR_kwDODunzps5ShUxQ
| 5,934 |
Modify levels of some logging messages
|
{
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-08T13:31:44 | 2023-07-12T18:21:03 | 2023-07-12T18:21:02 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5934",
"html_url": "https://github.com/huggingface/datasets/pull/5934",
"diff_url": "https://github.com/huggingface/datasets/pull/5934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5934.patch",
"merged_at": null
}
|
Some warning messages didn't quite sound like warnings so I modified their logging levels to info.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5934/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5933
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5933/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5933/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5933/events
|
https://github.com/huggingface/datasets/pull/5933
| 1,747,382,500 |
PR_kwDODunzps5Sfi5J
| 5,933 |
Fix `to_numpy` when None values in the sequence
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-08T08:38:56 | 2023-06-09T13:49:41 | 2023-06-09T13:23:48 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5933",
"html_url": "https://github.com/huggingface/datasets/pull/5933",
"diff_url": "https://github.com/huggingface/datasets/pull/5933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5933.patch",
"merged_at": "2023-06-09T13:23:48"
}
|
Closes #5927
I've realized that the error was overlooked during testing due to the presence of only one None value in the sequence.
Unfortunately, it was the only case where the function worked as expected. When the sequence contained more than one None value, the function failed. Consequently, I've updated the tests to include sequences with multiple None values.
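The shape of case the updated tests now cover looks like this (illustrative; it is essentially the repro from #5927):
```python
from datasets import Array2D, Dataset, Features, Sequence

# a sequence containing more than one None value
data = [[[[0]], None, None]]
features = Features({"a": Sequence(Array2D((1, 1), dtype="int64"))})
dataset = Dataset.from_dict({"a": data}, features=features)
dataset[0]  # raised IndexError before this fix
```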
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5933/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5932
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5932/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5932/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5932/events
|
https://github.com/huggingface/datasets/pull/5932
| 1,746,249,161 |
PR_kwDODunzps5Sbrzo
| 5,932 |
[doc build] Use secrets
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-07T16:09:39 | 2023-06-09T10:16:58 | 2023-06-09T09:53:16 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5932",
"html_url": "https://github.com/huggingface/datasets/pull/5932",
"diff_url": "https://github.com/huggingface/datasets/pull/5932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5932.patch",
"merged_at": "2023-06-09T09:53:16"
}
|
Companion PR to https://github.com/huggingface/doc-builder/pull/379
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5932/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5932/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5931
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5931/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5931/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5931/events
|
https://github.com/huggingface/datasets/issues/5931
| 1,745,408,784 |
I_kwDODunzps5oCNMQ
| 5,931 |
`datasets.map` not reusing cached copy by default
|
{
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-07T09:03:33 | 2023-06-21T16:15:40 | 2023-06-21T16:15:40 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the operation is applied again and the cached copy is not picked up. Is there any way to use the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions for this?
One more thing: my dataset occupies 6 GB of storage after I use `map`. Is there any way I can reduce that memory usage?
### Steps to reproduce the bug
```
# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
self.raw_datasets = self.raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
)
vectorized_datasets = self.raw_datasets.map(
self.prepare_dataset,
remove_columns=next(iter(self.raw_datasets.values())).column_names,
num_proc=self.num_workers,
desc="preprocess datasets",
)
# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
self.is_audio_in_length_range,
num_proc=self.num_workers,
input_columns=["input_length"],
)
def prepare_dataset(self, batch):
# load audio
sample = batch["audio"]
inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
return batch
```
### Expected behavior
`map` to use cached copy and if possible an alternative technique to reduce memory usage after using `map`
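For reference, the `save_to_disk` route mentioned above, continuing from the snippet in the steps to reproduce, looks like this (a sketch; the path is a placeholder):
```python
from datasets import load_from_disk

# after the last transform, persist the processed dataset once ...
vectorized_datasets.save_to_disk("/path/to/vectorized_datasets")
# ... and in later runs reload it instead of re-running `map`
vectorized_datasets = load_from_disk("/path/to/vectorized_datasets")
```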
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5931/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5930
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5930/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5930/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5930/events
|
https://github.com/huggingface/datasets/issues/5930
| 1,745,184,395 |
I_kwDODunzps5oBWaL
| 5,930 |
loading private custom dataset script - authentication error
|
{
"login": "flckv",
"id": 103381497,
"node_id": "U_kgDOBil5-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flckv",
"html_url": "https://github.com/flckv",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"repos_url": "https://api.github.com/users/flckv/repos",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-07T06:58:23 | 2023-06-15T14:49:21 | 2023-06-15T14:49:20 |
NONE
| null | null | null |
### Describe the bug
Training a model with my custom dataset, stored on Hugging Face and loaded with the loading script, requires authentication, but I am not sure how to provide it.
I am logged in in the terminal and in the browser. I receive this error:
/python3.8/site-packages/datasets/utils/file_utils.py", line 566, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`'))
When I added `use_auth_token=True` and logged in via the terminal, then I received the error:
or the same error in a different format:
raise ConnectionError(f"`Couldn't reach {url} (error {response.status_code}`)")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (`error 401`)
### Steps to reproduce the bug
1. cloned transformers library locally:
https://huggingface.co/docs/transformers/v4.15.0/examples :
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> cd /transformers/examples/pytorch/audio-classification
> pip install -r requirements.txt
2. created **loading script**
> https://huggingface.co/docs/datasets/dataset_script added next to dataset:
3. uploaded **private custom dataset** with loading script to HuggingFace
> https://huggingface.co/docs/datasets/dataset_script
4. added dataset loading script to **local directory** in the above cloned transformers library:
> cd /transformers/examples/pytorch/audio-classification
5. logged in to HuggingFace on local terminal with :
> **huggingface-cli login**
6. run the model with the custom dataset stored on HuggingFace with code: https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md
cd /transformers/examples/pytorch/audio-classification
> python run_audio_classification.py \
> --model_name_or_path facebook/wav2vec2-base \
> --output_dir l/users/flck/outputs/wav2vec2-base-s \
> --overwrite_output_dir \
> --dataset_name s \
> --dataset_config_name s \
> --remove_unused_columns False \
> --do_train \
> --do_eval \
> --fp16 \
> --learning_rate 3e-5 \
> --max_length_seconds 1 \
> --attention_mask False \
> --warmup_ratio 0.1 \
> --num_train_epochs 5 \
> --per_device_train_batch_size 32 \
> --gradient_accumulation_steps 4 \
> --per_device_eval_batch_size 32 \
> --dataloader_num_workers 4 \
> --logging_strategy steps \
> --logging_steps 10 \
> --evaluation_strategy epoch \
> --save_strategy epoch \
> --load_best_model_at_end True \
> --metric_for_best_model accuracy \
> --save_total_limit 3 \
> --seed 0 \
> --push_to_hub \
> **--use_auth_token=True**
### Expected behavior
Be able to train a model with https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/run_audio_classification.py using a private custom dataset stored on Hugging Face.
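For reference, the same private dataset can also be loaded directly in Python with the token passed explicitly (a sketch; the repo id `fkov/s` and config name `s` are taken from the error message and commands above):
```python
from datasets import load_dataset

# requires a prior `huggingface-cli login` (or passing the token string instead of True)
dataset = load_dataset("fkov/s", "s", use_auth_token=True)
```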
### Environment info
- datasets version: 2.12.0
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5930/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5929/events
|
https://github.com/huggingface/datasets/issues/5929
| 1,744,478,456 |
I_kwDODunzps5n-qD4
| 5,929 |
Importing PyTorch reduces multiprocessing performance for map
|
{
"login": "Maxscha",
"id": 12814709,
"node_id": "MDQ6VXNlcjEyODE0NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maxscha",
"html_url": "https://github.com/Maxscha",
"followers_url": "https://api.github.com/users/Maxscha/followers",
"following_url": "https://api.github.com/users/Maxscha/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions",
"organizations_url": "https://api.github.com/users/Maxscha/orgs",
"repos_url": "https://api.github.com/users/Maxscha/repos",
"events_url": "https://api.github.com/users/Maxscha/events{/privacy}",
"received_events_url": "https://api.github.com/users/Maxscha/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-06T19:42:25 | 2023-06-16T13:09:12 | 2023-06-16T13:09:12 |
NONE
| null | null | null |
### Describe the bug
I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported.
### Steps to reproduce the bug
I created two example scripts to reproduce this behavior:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
PROC=32
if __name__ == "__main__":
dataset = [True] * 10000000
dataset = Dataset.from_dict({'train': dataset})
start = time.time()
dataset.map(lambda x: x, num_proc=PROC)
end = time.time()
print(end - start)
```
Takes around 4 seconds on my machine.
While the same code, but with an `import torch`:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
import torch
PROC=32
if __name__ == "__main__":
dataset = [True] * 10000000
dataset = Dataset.from_dict({'train': dataset})
start = time.time()
dataset.map(lambda x: x, num_proc=PROC)
end = time.time()
print(end - start)
```
takes around 22 seconds.
### Expected behavior
I would expect the import of torch not to have such a significant effect on the performance of `map` using multiprocessing.
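One variable worth ruling out (an assumption about the cause, not a confirmed diagnosis) is thread oversubscription from the libraries torch initializes at import time; pinning the intra-op thread count makes the comparison fairer:
```python
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
import torch

torch.set_num_threads(1)  # control for OpenMP/MKL threads set up by the torch import
PROC = 32

if __name__ == "__main__":
    dataset = Dataset.from_dict({"train": [True] * 10_000_000})
    start = time.time()
    dataset.map(lambda x: x, num_proc=PROC)
    print(time.time() - start)
```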
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
- torch: 2.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5929/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5928
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5928/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5928/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5928/events
|
https://github.com/huggingface/datasets/pull/5928
| 1,744,098,371 |
PR_kwDODunzps5SUXPC
| 5,928 |
Fix link to quickstart docs in README.md
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-06T15:23:01 | 2023-06-06T15:52:34 | 2023-06-06T15:43:53 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5928",
"html_url": "https://github.com/huggingface/datasets/pull/5928",
"diff_url": "https://github.com/huggingface/datasets/pull/5928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5928.patch",
"merged_at": "2023-06-06T15:43:53"
}
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5928/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5927
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5927/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5927/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5927/events
|
https://github.com/huggingface/datasets/issues/5927
| 1,744,009,032 |
I_kwDODunzps5n83dI
| 5,927 |
`IndexError` when indexing `Sequence` of `Array2D` with `None` values
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-06T14:36:22 | 2023-06-13T12:39:39 | 2023-06-09T13:23:50 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Having `None` values in a `Sequence` of `ArrayND` fails.
### Steps to reproduce the bug
```python
from datasets import Array2D, Dataset, Features, Sequence
data = [
[
[[0]],
None,
None,
]
]
feature = Sequence(Array2D((1, 1), dtype="int64"))
dataset = Dataset.from_dict({"a": data}, features=Features({"a": feature}))
dataset[0] # error raised only when indexing
```
```
Traceback (most recent call last):
File "/Users/quentingallouedec/gia/c.py", line 13, in <module>
dataset[0] # error raised only when indexing
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2658, in __getitem__
return self._getitem(key)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2643, in _getitem
formatted_output = format_table(
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 634, in format_table
return formatter(pa_table, query_type=query_type)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 406, in __call__
return self.format_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 441, in format_row
row = self.python_arrow_extractor().extract_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 144, in extract_row
return _unnest(pa_table.to_pydict())
File "pyarrow/table.pxi", line 4146, in pyarrow.lib.Table.to_pydict
File "pyarrow/table.pxi", line 1312, in pyarrow.lib.ChunkedArray.to_pylist
File "pyarrow/array.pxi", line 1521, in pyarrow.lib.Array.to_pylist
File "pyarrow/scalar.pxi", line 675, in pyarrow.lib.ListScalar.as_py
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 760, in to_pylist
return self.to_numpy(zero_copy_only=zero_copy_only).tolist()
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 725, in to_numpy
numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)
File "<__array_function__ internals>", line 200, in insert
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/numpy/lib/function_base.py", line 5426, in insert
old_mask[indices] = False
IndexError: index 3 is out of bounds for axis 0 with size 3
```
AFAIK, the problem only occurs when you use a `Sequence` of `ArrayND`.
I strongly suspect that the problem comes from this line, or that `np.insert` is misused:
https://github.com/huggingface/datasets/blob/02ee418831aba68d0be93227bce8b3f42ef8980f/src/datasets/features/features.py#L729
To put it simply, you want something that does this:
```python
import numpy as np
numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])
np.insert(numpy_arr, null_indices, np.nan, axis=0)
# raises an error, instead of outputting
# array([[[ 0.]],
# [[nan]],
# [[nan]]])
```
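For what it's worth, offsetting each null index by the number of nulls before it gives the intended result (a sketch of the idea, not necessarily the fix that was merged):
```python
import numpy as np

numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])  # positions of the None entries in the final sequence
# shift each index so it refers to a position in the compact (null-free) array
adjusted = null_indices - np.arange(len(null_indices))
np.insert(numpy_arr.astype(np.float64), adjusted, np.nan, axis=0)
# array([[[ 0.]],
#        [[nan]],
#        [[nan]]])
```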
### Expected behavior
The previous code should not raise an error.
### Environment info
- Python 3.10.11
- datasets 2.10.0
- pyarrow 12.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5927/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5926/events
|
https://github.com/huggingface/datasets/issues/5926
| 1,743,922,028 |
I_kwDODunzps5n8iNs
| 5,926 |
Uncaught exception when generating the splits from a dataset that miss data
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"message",
"documentation_url"
] | 2023-06-06T13:51:01 | 2023-06-07T07:53:16 | null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Dataset https://huggingface.co/datasets/blog_authorship_corpus has an issue with its hosting platform, since https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns a 404 error.
But when trying to generate the split names, we get an exception which is not correctly caught.
Seen originally in https://github.com/huggingface/datasets-server/blob/adbdcd6710ffed4e2eb2e4cd905b5e0dff530a15/services/worker/src/worker/job_runners/config/parquet_and_info.py#L435
### Steps to reproduce the bug
```python
>>> from datasets import StreamingDownloadManager, load_dataset_builder
>>> builder = load_dataset_builder(path="blog_authorship_corpus")
Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.60k/5.60k [00:00<00:00, 23.1MB/s]
Downloading metadata: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.81k/2.81k [00:00<00:00, 14.7MB/s]
Downloading readme: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.30k/7.30k [00:00<00:00, 30.8MB/s]
>>> dl_manager = StreamingDownloadManager(base_path=builder.base_path)
>>> builder._split_generators(dl_manager)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/blog_authorship_corpus/6f5d78241afd8313111956f877a57db7a0e9fc6718255dc85df0928197feb683/blog_authorship_corpus.py", line 79, in _split_generators
data = dl_manager.download_and_extract(_DATA_URL)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1087, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1039, in extract
urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 435, in map_nested
return function(data_struct)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1044, in _extract
protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 433, in _get_extraction_protocol
with fsspec.open(urlpath, **kwargs) as f:
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 439, in open
return open_files(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 194, in __getitem__
out = super().__getitem__(item)
IndexError: list index out of range
```
### Expected behavior
We should have an Exception raised by the datasets library.
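Something along these lines around the `fsspec.open` call would surface a clearer error (a sketch of the kind of guard meant here, not an actual patch):
```python
import fsspec

def open_for_extraction(urlpath: str, **kwargs):
    """Sketch: wrap the fsspec call so a missing or broken URL surfaces as a clear error."""
    try:
        return fsspec.open(urlpath, **kwargs).open()
    except (IndexError, FileNotFoundError) as err:
        raise FileNotFoundError(f"Couldn't reach or open {urlpath}") from err
```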
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.19.0-1026-aws-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5926/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5925
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5925/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5925/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5925/events
|
https://github.com/huggingface/datasets/issues/5925
| 1,741,941,436 |
I_kwDODunzps5n0-q8
| 5,925 |
Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets
|
{
"login": "mtkinit",
"id": 78868366,
"node_id": "MDQ6VXNlcjc4ODY4MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/78868366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mtkinit",
"html_url": "https://github.com/mtkinit",
"followers_url": "https://api.github.com/users/mtkinit/followers",
"following_url": "https://api.github.com/users/mtkinit/following{/other_user}",
"gists_url": "https://api.github.com/users/mtkinit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mtkinit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtkinit/subscriptions",
"organizations_url": "https://api.github.com/users/mtkinit/orgs",
"repos_url": "https://api.github.com/users/mtkinit/repos",
"events_url": "https://api.github.com/users/mtkinit/events{/privacy}",
"received_events_url": "https://api.github.com/users/mtkinit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-05T14:46:04 | 2023-06-19T17:22:43 | 2023-06-19T17:22:43 |
NONE
| null | null | null |
### Describe the bug
Hi all,
after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of `HfApi.list_datasets` was changed and it returns an `Iterable` instead of a `list`, `datasets.list_datasets` sometimes returns a `list` and sometimes an `Iterable`.
It would be helpful to indicate that by the return type of the `datasets.list_datasets` function.
Thanks,
Martin
### Steps to reproduce the bug
Here, the code crashed after we updated the `datasets` library:
```python
# list_datasets no longer returns a list, which leads to an error when one tries to slice it
for dataset in datasets.list_datasets(with_details=True)[:limit]:
...
```
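On the caller side, a defensive workaround that handles both return types is to slice lazily (a sketch; `limit` is illustrative):
```python
import itertools
import datasets

limit = 10
# islice works whether list_datasets returns a list or a lazy iterable
for dataset in itertools.islice(datasets.list_datasets(with_details=True), limit):
    ...
```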
### Expected behavior
It would be helpful to indicate that by the return type of the `datasets.list_datasets` function.
### Environment info
Ubuntu 22.04
datasets 2.12.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5925/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5924
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5924/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5924/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5924/events
|
https://github.com/huggingface/datasets/pull/5924
| 1,738,889,236 |
PR_kwDODunzps5SCiFv
| 5,924 |
Add parallel module using joblib for Spark
|
{
"login": "es94129",
"id": 12763339,
"node_id": "MDQ6VXNlcjEyNzYzMzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/es94129",
"html_url": "https://github.com/es94129",
"followers_url": "https://api.github.com/users/es94129/followers",
"following_url": "https://api.github.com/users/es94129/following{/other_user}",
"gists_url": "https://api.github.com/users/es94129/gists{/gist_id}",
"starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/es94129/subscriptions",
"organizations_url": "https://api.github.com/users/es94129/orgs",
"repos_url": "https://api.github.com/users/es94129/repos",
"events_url": "https://api.github.com/users/es94129/events{/privacy}",
"received_events_url": "https://api.github.com/users/es94129/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-02T22:25:25 | 2023-06-14T10:25:10 | 2023-06-14T10:15:46 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5924",
"html_url": "https://github.com/huggingface/datasets/pull/5924",
"diff_url": "https://github.com/huggingface/datasets/pull/5924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5924.patch",
"merged_at": "2023-06-14T10:15:46"
}
|
Discussion in https://github.com/huggingface/datasets/issues/5798
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5924/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5923
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5923/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5923/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5923/events
|
https://github.com/huggingface/datasets/issues/5923
| 1,737,436,227 |
I_kwDODunzps5njyxD
| 5,923 |
Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility
|
{
"login": "ehuangc",
"id": 71412682,
"node_id": "MDQ6VXNlcjcxNDEyNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/71412682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehuangc",
"html_url": "https://github.com/ehuangc",
"followers_url": "https://api.github.com/users/ehuangc/followers",
"following_url": "https://api.github.com/users/ehuangc/following{/other_user}",
"gists_url": "https://api.github.com/users/ehuangc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehuangc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehuangc/subscriptions",
"organizations_url": "https://api.github.com/users/ehuangc/orgs",
"repos_url": "https://api.github.com/users/ehuangc/repos",
"events_url": "https://api.github.com/users/ehuangc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehuangc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-02T04:16:32 | 2023-12-13T15:53:52 | null |
NONE
| null | null | null |
### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
Traceback (most recent call last):
File "/Users/edward/test/test.py", line 1, in <module>
import datasets
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module>
import pyarrow.parquet as pq
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module>
from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module>
from pyarrow._gcsfs import GcsFileSystem # noqa
File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
`import datasets`
### Expected behavior
Successful import
### Environment info
Conda environment, MacOS
python 3.9.12
datasets 2.12.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5923/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5923/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5922
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5922/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5922/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5922/events
|
https://github.com/huggingface/datasets/issues/5922
| 1,736,898,953 |
I_kwDODunzps5nhvmJ
| 5,922 |
Length of table does not accurately reflect the split
|
{
"login": "amogkam",
"id": 8068268,
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amogkam",
"html_url": "https://github.com/amogkam",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"repos_url": "https://api.github.com/users/amogkam/repos",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-01T18:56:26 | 2023-06-02T16:13:31 | 2023-06-02T16:13:31 |
NONE
| null | null | null |
### Describe the bug
I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not.
### Steps to reproduce the bug

### Expected behavior
The expected behavior is when `len(hf_dataset["train"].data)` should match the length of the train split, and not be the entire unsplit dataset.
### Environment info
datasets 2.10.1
python 3.10.11
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5922/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5921
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5921/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5921/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5921/events
|
https://github.com/huggingface/datasets/pull/5921
| 1,736,563,023 |
PR_kwDODunzps5R6j-y
| 5,921 |
Fix streaming parquet with image feature in schema
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-01T15:23:10 | 2023-06-02T10:02:54 | 2023-06-02T09:53:11 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5921",
"html_url": "https://github.com/huggingface/datasets/pull/5921",
"diff_url": "https://github.com/huggingface/datasets/pull/5921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5921.patch",
"merged_at": "2023-06-02T09:53:11"
}
|
It was not reading the feature type from the parquet arrow schema
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5921/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5920
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5920/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5920/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5920/events
|
https://github.com/huggingface/datasets/pull/5920
| 1,736,196,991 |
PR_kwDODunzps5R5TRB
| 5,920 |
Optimize IterableDataset.from_file using ArrowExamplesIterable
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-01T12:14:36 | 2023-06-01T12:42:10 | 2023-06-01T12:35:14 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5920",
"html_url": "https://github.com/huggingface/datasets/pull/5920",
"diff_url": "https://github.com/huggingface/datasets/pull/5920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5920.patch",
"merged_at": "2023-06-01T12:35:14"
}
|
following https://github.com/huggingface/datasets/pull/5893
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5920/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5919
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5919/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5919/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5919/events
|
https://github.com/huggingface/datasets/pull/5919
| 1,735,519,227 |
PR_kwDODunzps5R2_EK
| 5,919 |
add support for storage_options for load_dataset API
|
{
"login": "janineguo",
"id": 59083384,
"node_id": "MDQ6VXNlcjU5MDgzMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/59083384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janineguo",
"html_url": "https://github.com/janineguo",
"followers_url": "https://api.github.com/users/janineguo/followers",
"following_url": "https://api.github.com/users/janineguo/following{/other_user}",
"gists_url": "https://api.github.com/users/janineguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janineguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janineguo/subscriptions",
"organizations_url": "https://api.github.com/users/janineguo/orgs",
"repos_url": "https://api.github.com/users/janineguo/repos",
"events_url": "https://api.github.com/users/janineguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/janineguo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-01T05:52:32 | 2023-07-18T06:14:32 | 2023-07-17T17:02:00 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5919",
"html_url": "https://github.com/huggingface/datasets/pull/5919",
"diff_url": "https://github.com/huggingface/datasets/pull/5919.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5919.patch",
"merged_at": null
}
|
To solve the issue in #5880:
1. add S3 support in the link check step; previously we only checked `http` and `https`,
2. change the `use_auth_token` parameter to `download_config` to support both the `storage_options` and `use_auth_token` parameters when trying to handle (list, open, read, etc.) the remote files,
3. consolidate the duplicated code of the check step to make adding or deleting other sources easier.
A usage sketch of the intended API follows below.
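A hypothetical sketch of the intended usage; the bucket path and credential keys are made up, and the exact contents of `storage_options` depend on the target filesystem:
```python
from datasets import load_dataset

# made-up S3 credentials, passed through to the underlying fsspec filesystem
storage_options = {"key": "<aws-access-key-id>", "secret": "<aws-secret-access-key>"}

ds = load_dataset(
    "csv",
    data_files="s3://my-bucket/train.csv",
    storage_options=storage_options,
)
```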
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5919/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5918
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5918/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5918/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5918/events
|
https://github.com/huggingface/datasets/issues/5918
| 1,735,313,549 |
I_kwDODunzps5nbsiN
| 5,918 |
File not found for audio dataset
|
{
"login": "RobertBaruch",
"id": 1783950,
"node_id": "MDQ6VXNlcjE3ODM5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobertBaruch",
"html_url": "https://github.com/RobertBaruch",
"followers_url": "https://api.github.com/users/RobertBaruch/followers",
"following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}",
"gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions",
"organizations_url": "https://api.github.com/users/RobertBaruch/orgs",
"repos_url": "https://api.github.com/users/RobertBaruch/repos",
"events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobertBaruch/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-06-01T02:15:29 | 2023-06-11T06:02:25 | null |
NONE
| null | null | null |
### Describe the bug
After loading an audio dataset, and looking at a sample entry, the `path` element, which is supposed to be the path to the audio file, doesn't actually exist.
### Steps to reproduce the bug
Run bug.py:
```py
import os.path
from datasets import load_dataset
def run() -> None:
cv13 = load_dataset(
"mozilla-foundation/common_voice_13_0",
"hi",
split="train",
)
print(cv13[0])
audio_file = cv13[0]["path"]
if not os.path.exists(audio_file):
raise ValueError(f'File {audio_file} does not exist.')
if __name__ == "__main__":
run()
```
The result (on my machine):
```json
{'client_id': '0f018a99663f33afbb7d38aee281fb1afcfd07f9e7acd00383f604e1e17c38d6ed8adf1bd2ccbf927a52c5adefb8ac4b158ce27a7c2ed9581e71202eb302dfb3', 'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3', 'audio': {'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3', 'array': array([ 6.46234854e-26, -1.35709319e-25, -8.07793567e-26, ...,
1.06425944e-07,  4.46417090e-08,  2.61451660e-09]), 'sampling_rate': 48000}, 'sentence': 'हमने उसका जन्मदिन मनाया।', 'up_votes': 2, 'down_votes': 0, 'age': '', 'gender': '', 'accent': '', 'locale': 'hi', 'segment': '', 'variant': ''}
```
```txt
Traceback (most recent call last):
File "F:\eo-reco\bug.py", line 18, in <module>
run()
File "F:\eo-reco\bug.py", line 15, in run
raise ValueError(f'File {audio_file} does not exist.')
ValueError: File C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\common_voice_hi_26008353.mp3 does not exist.
```
### Expected behavior
The `path` element points to the correct file, which happens to be:
```
C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\hi_train_0\common_voice_hi_26008353.mp3
```
That is, there's an extra directory `hi_train_0` that is not in the `path` element.
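A hypothetical workaround sketch (not from the report): search below the extraction directory for the file name, since the reported `path` is missing the intermediate folder:
```python
import glob
import os.path

def resolve_audio_path(reported_path: str) -> str:
    # look for the file name anywhere below the reported parent directory,
    # to cope with the extra extraction subfolder (e.g. hi_train_0)
    parent = os.path.dirname(reported_path)
    name = os.path.basename(reported_path)
    matches = glob.glob(os.path.join(parent, "**", name), recursive=True)
    return matches[0] if matches else reported_path
```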
### Environment info
- `datasets` version: 2.12.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5918/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5918/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5917
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5917/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5917/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5917/events
|
https://github.com/huggingface/datasets/pull/5917
| 1,733,661,588 |
PR_kwDODunzps5RwoRU
| 5,917 |
Refactor extensions
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-31T08:33:02 | 2023-05-31T13:34:35 | 2023-05-31T13:25:57 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5917",
"html_url": "https://github.com/huggingface/datasets/pull/5917",
"diff_url": "https://github.com/huggingface/datasets/pull/5917.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5917.patch",
"merged_at": "2023-05-31T13:25:57"
}
|
Related to:
- #5850
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5917/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5916
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5916/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5916/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5916/events
|
https://github.com/huggingface/datasets/pull/5916
| 1,732,456,392 |
PR_kwDODunzps5RskTb
| 5,916 |
Unpin responses
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-30T14:59:48 | 2023-05-30T18:03:10 | 2023-05-30T17:53:29 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5916",
"html_url": "https://github.com/huggingface/datasets/pull/5916",
"diff_url": "https://github.com/huggingface/datasets/pull/5916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5916.patch",
"merged_at": "2023-05-30T17:53:29"
}
|
Fix #5906
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5916/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5915
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5915/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5915/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5915/events
|
https://github.com/huggingface/datasets/pull/5915
| 1,732,389,984 |
PR_kwDODunzps5RsVzj
| 5,915 |
Raise error in `DatasetBuilder.as_dataset` when `file_format` is not `"arrow"`
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-30T14:27:55 | 2023-05-31T13:31:21 | 2023-05-31T13:23:54 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5915",
"html_url": "https://github.com/huggingface/datasets/pull/5915",
"diff_url": "https://github.com/huggingface/datasets/pull/5915.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5915.patch",
"merged_at": "2023-05-31T13:23:54"
}
|
Raise an error in `DatasetBuilder.as_dataset` when `file_format != "arrow"` (and fix the docstring)
Fix #5874
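For illustration only, a standalone sketch of the kind of guard described; the real change lives inside `DatasetBuilder.as_dataset`, and the function name and message here are assumptions:
```python
def check_loadable_file_format(file_format: str) -> None:
    # hypothetical guard: only datasets prepared in the "arrow" format can be loaded back
    if file_format != "arrow":
        raise NotImplementedError(
            f'Loading a dataset cached in the "{file_format}" format is not supported; '
            'only the "arrow" format is.'
        )
```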
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5915/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5914
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5914/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5914/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5914/events
|
https://github.com/huggingface/datasets/issues/5914
| 1,731,483,996 |
I_kwDODunzps5nNFlc
| 5,914 |
array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets
|
{
"login": "ravenouse",
"id": 85110830,
"node_id": "MDQ6VXNlcjg1MTEwODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/85110830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ravenouse",
"html_url": "https://github.com/ravenouse",
"followers_url": "https://api.github.com/users/ravenouse/followers",
"following_url": "https://api.github.com/users/ravenouse/following{/other_user}",
"gists_url": "https://api.github.com/users/ravenouse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ravenouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ravenouse/subscriptions",
"organizations_url": "https://api.github.com/users/ravenouse/orgs",
"repos_url": "https://api.github.com/users/ravenouse/repos",
"events_url": "https://api.github.com/users/ravenouse/events{/privacy}",
"received_events_url": "https://api.github.com/users/ravenouse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-30T04:25:00 | 2023-05-30T04:25:00 | null |
NONE
| null | null | null |
### Describe the bug
When using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message "array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size."
Detailed error message:
Traceback (most recent call last):
File "data_processing.py", line 26, in <module>
processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split],writer_batch_size = 50)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2405, in map
desc=desc,
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2756, in _map_single
example = apply_function_on_filtered_inputs(example, i, offset=offset)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "data_processing.py", line 11, in prepare_dataset
audio = batch["audio"]
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 123, in __getitem__
value = decode_nested_example(self.features[key], value) if value is not None else None
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/features.py", line 1260, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 156, in decode_example
array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 257, in _decode_non_mp3_path_like
array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 176, in load
y, sr_native = __soundfile_load(path, offset, duration, dtype)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 222, in __soundfile_load
y = sf_desc.read(frames=frame_duration, dtype=dtype, always_2d=False).T
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 891, in read
out = self._create_empty_array(frames, always_2d, dtype)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 1323, in _create_empty_array
return np.empty(shape, dtype, order='C')
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
### Steps to reproduce the bug
```python
from datasets import load_dataset, DatasetDict
from transformers import WhisperFeatureExtractor
from transformers import WhisperTokenizer
samromur_children= load_dataset("language-and-voice-lab/samromur_children")
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="icelandic", task="transcribe")
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=16000).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["normalized_text"]).input_ids
return batch
cache_dict = {"train": "./cache/audio_train.cache", \
"validation": "./cache/audio_validation.cache", \
"test": "./cache/audio_test.cache"}
filter_cache_dict = {"train": "./cache/filter_train.arrow", \
"validation": "./cache/filter_validation.arrow", \
"test": "./cache/filter_test.arrow"}
print("before filtering")
print(samromur_children)
#filter the dataset to only include examples with more than 2 seconds of audio
samromur_children = samromur_children.filter(lambda example: example["audio"]["array"].shape[0] > 16000*2, cache_file_names=filter_cache_dict)
print("after filtering")
print(samromur_children)
processed_dataset = DatasetDict()
# processed_dataset = samromur_children.map(prepare_dataset, cache_file_names=cache_dict, num_proc=10,)
for split in ["train", "validation", "test"]:
processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split])
```
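As a debugging aid (not part of the original report), a sketch that scans audio files for entries whose reported frame count would force soundfile to allocate a huge buffer; the `max_frames` threshold is an arbitrary assumption:
```python
import soundfile as sf

def suspicious_files(paths, max_frames=16000 * 60 * 30):  # ~30 minutes at 16 kHz
    # return files that are unreadable or report an implausibly large frame count
    bad = []
    for path in paths:
        try:
            info = sf.info(path)
            if info.frames > max_frames:
                bad.append((path, info.frames))
        except RuntimeError:
            bad.append((path, None))
    return bad
```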
### Expected behavior
The dataset is successfully processed and ready to train the model.
### Environment info
Python version: 3.7.13
datasets package version: 2.4.0
librosa package version: 0.10.0.post2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5914/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5913
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5913/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5913/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5913/events
|
https://github.com/huggingface/datasets/issues/5913
| 1,731,427,484 |
I_kwDODunzps5nM3yc
| 5,913 |
I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred.
|
{
"login": "cjt222",
"id": 17508662,
"node_id": "MDQ6VXNlcjE3NTA4NjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17508662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cjt222",
"html_url": "https://github.com/cjt222",
"followers_url": "https://api.github.com/users/cjt222/followers",
"following_url": "https://api.github.com/users/cjt222/following{/other_user}",
"gists_url": "https://api.github.com/users/cjt222/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cjt222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cjt222/subscriptions",
"organizations_url": "https://api.github.com/users/cjt222/orgs",
"repos_url": "https://api.github.com/users/cjt222/repos",
"events_url": "https://api.github.com/users/cjt222/events{/privacy}",
"received_events_url": "https://api.github.com/users/cjt222/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-30T02:55:26 | 2023-07-24T12:00:38 | 2023-07-24T12:00:38 |
NONE
| null | null | null |
### Describe the bug
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
Downloading and preparing dataset json/default to /home/kas/diffusers/examples/dreambooth/cache_data/datasets/json/default-acf423d8c6ef99d0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 84.35it/s]
Extracting data files: 0%| | 0/1 [00:00<?, ?it/s] for _, table in generator:
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 114, in _generate_tables
io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
File "pyarrow/_json.pyx", line 258, in pyarrow._json.read_json
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 27.72it/s]
Generating train split: 0 examples [00:00, ? examples/s] File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 125, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2390448764
### Steps to reproduce the bug
1. data_files = ["1.json", "2.json", "3.json"]
2. dataset = load_dataset('json', data_files=data_files)
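A hypothetical workaround sketch (not from the report): re-write each large JSON array as JSON Lines so pyarrow can read it in smaller blocks instead of one array larger than 2 GB:
```python
import json

def to_jsonl(src: str, dst: str) -> None:
    # load the big JSON array once, then write one record per line
    with open(src, encoding="utf-8") as f:
        records = json.load(f)
    with open(dst, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```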
### Expected behavior
Read the dataset normally.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.15.0-29-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5913/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5912
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5912/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5912/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5912/events
|
https://github.com/huggingface/datasets/issues/5912
| 1,730,299,852 |
I_kwDODunzps5nIkfM
| 5,912 |
Missing elements in `map` a batched dataset
|
{
"login": "sachinruk",
"id": 1410927,
"node_id": "MDQ6VXNlcjE0MTA5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinruk",
"html_url": "https://github.com/sachinruk",
"followers_url": "https://api.github.com/users/sachinruk/followers",
"following_url": "https://api.github.com/users/sachinruk/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions",
"organizations_url": "https://api.github.com/users/sachinruk/orgs",
"repos_url": "https://api.github.com/users/sachinruk/repos",
"events_url": "https://api.github.com/users/sachinruk/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinruk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-29T08:09:19 | 2023-07-26T15:48:15 | 2023-07-26T15:48:15 |
NONE
| null | null | null |
### Describe the bug
As outlined [here](https://discuss.huggingface.co/t/length-error-using-map-with-datasets/40969/3?u=sachin), the following collate function drops 5 out of possible 6 elements in the batch (it is 6 because out of the eight, two are bad links in laion). A reproducible [kaggle kernel ](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here.
The weirdest part is that, when inspecting the sizes of the tensors as shown below, both `tokenized_captions["input_ids"]` and `image_features` show the correct shapes; yet the output only has one element (with the batch dimension squeezed out).
```python
class CollateFn:
def get_image(self, url):
try:
response = requests.get(url)
return Image.open(io.BytesIO(response.content)).convert("RGB")
except PIL.UnidentifiedImageError:
logger.info(f"Reading error: Could not transform f{url}")
return None
except requests.exceptions.ConnectionError:
logger.info(f"Connection error: Could not transform f{url}")
return None
def __call__(self, batch):
images = [self.get_image(url) for url in batch["url"]]
captions = [caption for caption, image in zip(batch["caption"], images) if image is not None]
images = [image for image in images if image is not None]
tokenized_captions = tokenizer(
captions,
padding="max_length",
truncation=True,
max_length=tokenizer.model_max_length,
return_tensors="pt",
)
image_features = torch.stack([torch.Tensor(feature_extractor(image)["pixel_values"][0]) for image in images])
# import pdb; pdb.set_trace()
return {"input_ids": tokenized_captions["input_ids"], "images": image_features}
collate_fn = CollateFn()
laion_ds = datasets.load_dataset("laion/laion400m", split="train", streaming=True)
laion_ds_batched = laion_ds.map(collate_fn, batched=True, batch_size=8, remove_columns=next(iter(laion_ds)).keys())
```
### Steps to reproduce the bug
A reproducible [kaggle kernel ](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here.
### Expected behavior
Would expect `next(iter(laion_ds_batched))` to produce two tensors of shape `(batch_size, 77)` and `(batch_size, image_shape)`.
### Environment info
datasets==2.12.0
python==3.10
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5912/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5910
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5910/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5910/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5910/events
|
https://github.com/huggingface/datasets/issues/5910
| 1,728,909,790 |
I_kwDODunzps5nDRHe
| 5,910 |
Cannot use both set_format and set_transform
|
{
"login": "ybouane",
"id": 14046002,
"node_id": "MDQ6VXNlcjE0MDQ2MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/14046002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ybouane",
"html_url": "https://github.com/ybouane",
"followers_url": "https://api.github.com/users/ybouane/followers",
"following_url": "https://api.github.com/users/ybouane/following{/other_user}",
"gists_url": "https://api.github.com/users/ybouane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ybouane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ybouane/subscriptions",
"organizations_url": "https://api.github.com/users/ybouane/orgs",
"repos_url": "https://api.github.com/users/ybouane/repos",
"events_url": "https://api.github.com/users/ybouane/events{/privacy}",
"received_events_url": "https://api.github.com/users/ybouane/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-27T19:22:23 | 2023-07-09T21:40:54 | 2023-06-16T14:41:24 |
NONE
| null | null | null |
### Describe the bug
I need to process some data using the set_transform method but I also need the data to be formatted for pytorch before processing it.
I don't see anywhere in the documentation something that says that both methods cannot be used at the same time.
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset("mnist", split="train")
ds.set_format(type="torch")
def transform(entry):
return entry["image"].double()
ds.set_transform(transform)
print(ds[0])
```
### Expected behavior
It should print the PyTorch tensor image as a double, but it errors because "entry" in the transform function doesn't receive a PyTorch tensor to begin with; it receives a PIL Image, so entry["image"].double() fails because the image isn't a PyTorch tensor.
### Environment info
Latest versions.
### Note:
It would at least be handy to have access to a function that can apply the dataset.set_format conversion from inside the set_transform function.
Something like:
```
from datasets import load_dataset, do_format
ds = load_dataset("mnist", split="train")
def transform(entry):
entry = do_format(entry, type="torch")
return entry["image"].double()
ds.set_transform(transform)
print(ds[0])
```
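One workaround sketch, not the requested do_format API: do the tensor conversion inside the transform itself, assuming torchvision is available for the PIL-to-tensor step:
```python
from datasets import load_dataset
from torchvision.transforms.functional import pil_to_tensor

ds = load_dataset("mnist", split="train")

def transform(batch):
    # set_transform passes a batch dict of raw (PIL) images; convert them here
    batch["image"] = [pil_to_tensor(img).double() for img in batch["image"]]
    return batch

ds.set_transform(transform)
print(ds[0])
```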
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5910/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5910/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5909
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5909/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5909/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5909/events
|
https://github.com/huggingface/datasets/pull/5909
| 1,728,900,068 |
PR_kwDODunzps5Rgga6
| 5,909 |
Use more efficient and idiomatic way to construct list.
|
{
"login": "ttsugriy",
"id": 172294,
"node_id": "MDQ6VXNlcjE3MjI5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttsugriy",
"html_url": "https://github.com/ttsugriy",
"followers_url": "https://api.github.com/users/ttsugriy/followers",
"following_url": "https://api.github.com/users/ttsugriy/following{/other_user}",
"gists_url": "https://api.github.com/users/ttsugriy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttsugriy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttsugriy/subscriptions",
"organizations_url": "https://api.github.com/users/ttsugriy/orgs",
"repos_url": "https://api.github.com/users/ttsugriy/repos",
"events_url": "https://api.github.com/users/ttsugriy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttsugriy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-27T18:54:47 | 2023-05-31T15:37:11 | 2023-05-31T13:28:29 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5909",
"html_url": "https://github.com/huggingface/datasets/pull/5909",
"diff_url": "https://github.com/huggingface/datasets/pull/5909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5909.patch",
"merged_at": "2023-05-31T13:28:28"
}
|
Using `*` is ~2X faster according to [benchmark](https://colab.research.google.com/gist/ttsugriy/c964a2604edf70c41911b10335729b6a/for-vs-mult.ipynb) with just 4 patterns. This doesn't matter much since this tiny difference is not going to be noticeable, but why not?
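A generic illustration of the two idioms being compared (not the actual `datasets` code touched by this PR):
```python
from timeit import timeit

n = 4  # "just 4 patterns"
with_loop = timeit(lambda: [None for _ in range(n)], number=1_000_000)
with_mult = timeit(lambda: [None] * n, number=1_000_000)
print(f"comprehension: {with_loop:.3f}s  multiplication: {with_mult:.3f}s")
```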
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5909/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5908
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5908/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5908/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5908/events
|
https://github.com/huggingface/datasets/issues/5908
| 1,728,653,935 |
I_kwDODunzps5nCSpv
| 5,908 |
Unbearably slow sorting on big mapped datasets
|
{
"login": "maximxlss",
"id": 29152154,
"node_id": "MDQ6VXNlcjI5MTUyMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/29152154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximxlss",
"html_url": "https://github.com/maximxlss",
"followers_url": "https://api.github.com/users/maximxlss/followers",
"following_url": "https://api.github.com/users/maximxlss/following{/other_user}",
"gists_url": "https://api.github.com/users/maximxlss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximxlss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximxlss/subscriptions",
"organizations_url": "https://api.github.com/users/maximxlss/orgs",
"repos_url": "https://api.github.com/users/maximxlss/repos",
"events_url": "https://api.github.com/users/maximxlss/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximxlss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-27T11:08:32 | 2023-06-13T17:45:10 | null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
For me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is about a 5x slowdown. Moreover, it seems to slow down exponentially with bigger datasets (I wasn't able to sort 700k lines at all, while with flattening it takes about a minute).
### Steps to reproduce the bug
```Python
from datasets import load_dataset
import time
dataset = load_dataset("xnli", "en", split="train")
dataset = dataset.shard(10, 0)
print(len(dataset))
t = time.time()
# dataset = dataset.flatten_indices() # uncomment this line and it's fast
dataset = dataset.sort("label", reverse=True, load_from_cache_file=False)
print(f"finished in {time.time() - t:.4f} seconds")
```
### Expected behavior
Expect sorting to take the same or less time than flattening and then sorting.
### Environment info
- `datasets` version: 2.12.1.dev0 (same with 2.12.0 too)
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5908/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5907
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5907/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5907/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5907/events
|
https://github.com/huggingface/datasets/pull/5907
| 1,728,648,560 |
PR_kwDODunzps5RfqUU
| 5,907 |
Add `flatten_indices` to `DatasetDict`
|
{
"login": "maximxlss",
"id": 29152154,
"node_id": "MDQ6VXNlcjI5MTUyMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/29152154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximxlss",
"html_url": "https://github.com/maximxlss",
"followers_url": "https://api.github.com/users/maximxlss/followers",
"following_url": "https://api.github.com/users/maximxlss/following{/other_user}",
"gists_url": "https://api.github.com/users/maximxlss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximxlss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximxlss/subscriptions",
"organizations_url": "https://api.github.com/users/maximxlss/orgs",
"repos_url": "https://api.github.com/users/maximxlss/repos",
"events_url": "https://api.github.com/users/maximxlss/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximxlss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-27T10:55:44 | 2023-06-01T11:46:35 | 2023-06-01T11:39:36 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5907",
"html_url": "https://github.com/huggingface/datasets/pull/5907",
"diff_url": "https://github.com/huggingface/datasets/pull/5907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5907.patch",
"merged_at": "2023-06-01T11:39:35"
}
|
Add `flatten_indices` to `DatasetDict` for convenience
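A rough usage sketch of what this enables (the dataset name here is just an example):
```python
from datasets import load_dataset

dsd = load_dataset("xnli", "en")   # a DatasetDict with train/validation/test splits
dsd = dsd.shuffle(seed=42)         # each split now carries an indices mapping
dsd = dsd.flatten_indices()        # new: flatten every split at once instead of per split
```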
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5907/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5906
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5906/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5906/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5906/events
|
https://github.com/huggingface/datasets/issues/5906
| 1,728,171,113 |
I_kwDODunzps5nAcxp
| 5,906 |
Could you unpin responses version?
|
{
"login": "kenimou",
"id": 47789026,
"node_id": "MDQ6VXNlcjQ3Nzg5MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/47789026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenimou",
"html_url": "https://github.com/kenimou",
"followers_url": "https://api.github.com/users/kenimou/followers",
"following_url": "https://api.github.com/users/kenimou/following{/other_user}",
"gists_url": "https://api.github.com/users/kenimou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenimou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenimou/subscriptions",
"organizations_url": "https://api.github.com/users/kenimou/orgs",
"repos_url": "https://api.github.com/users/kenimou/repos",
"events_url": "https://api.github.com/users/kenimou/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenimou/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-26T20:02:14 | 2023-05-30T17:53:31 | 2023-05-30T17:53:31 |
NONE
| null | null | null |
### Describe the bug
Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to the test requirements? This is a testing library that we also use for our own tests, and we do not want to use a very outdated version.
### Steps to reproduce the bug
Could not install this library due to a dependency conflict.
### Expected behavior
can install datasets
### Environment info
linux 64
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5906/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5905
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5905/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5905/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5905/events
|
https://github.com/huggingface/datasets/issues/5905
| 1,727,541,392 |
I_kwDODunzps5m-DCQ
| 5,905 |
Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently
|
{
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-26T12:33:02 | 2023-06-15T13:34:18 | null |
CONTRIBUTOR
| null | null | null |
### Feature request
I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.
### Motivation
I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally intensive audio processing to do. As a result I want to load data from my remote when it is needed and perform all processing on the fly.
I am currently using the iterable dataset feature of _datasets_. It does everything I need with one exception. My issue is that when resuming training at a step n, we have to download all the data and perform the processing of steps < n, just to get the iterable to the right step. In my case that takes almost as long as training for the same steps, which makes resuming training from a checkpoint useless in practice.
I understand that the nature of iterators probably makes it nearly impossible to quickly resume training.
I thought about a possible solution nonetheless :
I could in fact index my large dataset and make it a mapped dataset. Then I could use set_transform to perform the processing on the fly. Finally, if I'm not mistaken, the _accelerate_ package allows to [skip steps efficiently](https://github.com/huggingface/accelerate/blob/a73898027a211c3f6dc4460351b0ec246aa824aa/src/accelerate/data_loader.py#L827) for a mapped dataset.
Is it possible to lazily load samples of a mapped dataset ? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script), maybe something can be done there.
If not, I could do it using a plain _PyTorch_ dataset. Then I would need to convert it to a _datasets_ dataset to get all the features of _datasets_. Is that possible?
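A rough sketch of the mapped-index plus on-the-fly processing idea described above; all names and the `fetch_and_process` stub are hypothetical:
```python
from datasets import Dataset

def fetch_and_process(url: str):
    # stand-in for the expensive download + audio preprocessing step
    return [0.0]

# lightweight index: only references are materialized up front
index = Dataset.from_dict({"audio_url": [f"s3://bucket/{i}.wav" for i in range(4)]})

def lazy_transform(batch):
    # runs only for the rows actually accessed, so skipping ahead stays cheap
    batch["features"] = [fetch_and_process(url) for url in batch["audio_url"]]
    return batch

index.set_transform(lazy_transform)
print(index[2])  # only row 2 is fetched and processed
```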
### Your contribution
I could provide a PR to allow lazy loading of a mapped dataset, or the conversion of a mapped _PyTorch_ dataset into a _Datasets_ dataset, if you think it is a useful new feature.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5905/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5904
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5904/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5904/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5904/events
|
https://github.com/huggingface/datasets/pull/5904
| 1,727,415,626 |
PR_kwDODunzps5Rbfks
| 5,904 |
Validate name parameter in make_file_instructions
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-26T11:12:46 | 2023-05-31T07:43:32 | 2023-05-31T07:34:57 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5904",
"html_url": "https://github.com/huggingface/datasets/pull/5904",
"diff_url": "https://github.com/huggingface/datasets/pull/5904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5904.patch",
"merged_at": "2023-05-31T07:34:57"
}
|
Validate `name` parameter in `make_file_instructions`.
This way users get more informative error messages, instead of:
```stacktrace
.../huggingface/datasets/src/datasets/arrow_reader.py in make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path)
110 name2len = {info.name: info.num_examples for info in split_infos}
111 name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}
--> 112 name2filenames = {
113 info.name: filenames_for_dataset_split(
114 path=prefix_path,
.../huggingface/datasets/src/datasets/arrow_reader.py in <dictcomp>(.0)
111 name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}
112 name2filenames = {
--> 113 info.name: filenames_for_dataset_split(
114 path=prefix_path,
115 dataset_name=name,
.../huggingface/datasets/src/datasets/naming.py in filenames_for_dataset_split(path, dataset_name, split, filetype_suffix, shard_lengths)
68
69 def filenames_for_dataset_split(path, dataset_name, split, filetype_suffix=None, shard_lengths=None):
---> 70 prefix = filename_prefix_for_split(dataset_name, split)
71 prefix = os.path.join(path, prefix)
72
.../huggingface/datasets/src/datasets/naming.py in filename_prefix_for_split(name, split)
52
53 def filename_prefix_for_split(name, split):
---> 54 if os.path.basename(name) != name:
55 raise ValueError(f"Should be a dataset name, not a path: {name}")
56 if not re.match(_split_re, split):
.../lib/python3.9/posixpath.py in basename(p)
140 def basename(p):
141 """Returns the final component of a pathname"""
--> 142 p = os.fspath(p)
143 sep = _get_sep(p)
144 i = p.rfind(sep) + 1
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
Related to #5895.
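A rough sketch of the kind of early validation the PR title describes, using the parameter names visible in the stack trace above; the exact check and error message may differ from the actual diff:
```python
def make_file_instructions(name, split_infos, instruction, filetype_suffix=None, prefix_path=None):
    # Fail fast with an informative message instead of a TypeError deep inside os.path.
    if not isinstance(name, str) or not name:
        raise ValueError(f"Expected a non-empty str 'name', but got: {name!r}")
    ...
```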
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5904/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5903
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5903/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5903/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5903/events
|
https://github.com/huggingface/datasets/pull/5903
| 1,727,372,549 |
PR_kwDODunzps5RbV82
| 5,903 |
Relax `ci.yml` trigger for `pull_request` based on modified paths
|
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-26T10:46:52 | 2023-09-07T15:52:36 | null |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5903",
"html_url": "https://github.com/huggingface/datasets/pull/5903",
"diff_url": "https://github.com/huggingface/datasets/pull/5903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5903.patch",
"merged_at": null
}
|
## What's in this PR?
In a previous PR, #5902, I saw that the CI was automatically triggered on any file change, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as modifying the notebook has no effect/impact on the `ci.yml` outcome. So this PR restricts the paths that trigger `ci.yml`, to avoid wasting resources when they are not needed.
## What's pending in this PR?
I would like to confirm whether this should affect both `push` and `pull_request`: since modifications to just those files won't change the `ci.yml` outcome, it may be worth skipping the `push` trigger as well.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5903/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5902
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5902/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5902/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5902/events
|
https://github.com/huggingface/datasets/pull/5902
| 1,727,342,194 |
PR_kwDODunzps5RbPS9
| 5,902 |
Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository
|
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-26T10:25:01 | 2023-07-25T13:50:06 | 2023-07-25T13:38:33 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5902",
"html_url": "https://github.com/huggingface/datasets/pull/5902",
"diff_url": "https://github.com/huggingface/datasets/pull/5902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5902.patch",
"merged_at": "2023-07-25T13:38:33"
}
|
## What's in this PR?
This PR solves #5887: there was a mismatch between the tokenizer and the model used, as the tokenizer was `bert-base-cased` while the model was `distilbert-base-cased`, for both the PyTorch and TensorFlow alternatives. Since DistilBERT doesn't use or need `token_type_ids`, the `**batch` call was failing, because the batch contained `input_ids`, `attention_mask`, `token_type_ids`, `start_positions` and `end_positions`, while `token_type_ids` is not accepted by the model.
Besides that, at the end `seqeval` was used to evaluate the model predictions, but only `evaluate` was being installed, so I've also included the `seqeval` installation.
Finally, I've re-run everything in Google Colab, and every cell was successfully executed!
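A minimal sketch of the kind of fix described above, assuming the question-answering example from the notebook: load the tokenizer and the model from the same checkpoint so the tokenized batch only contains keys the model accepts (the checkpoint name is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

checkpoint = "distilbert-base-cased"  # same checkpoint for both tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# DistilBERT tokenizers do not produce token_type_ids, so model(**batch)
# no longer receives an unexpected keyword argument.
```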
## What was done on top of the original PR?
Based on the comments from @mariosasko and @stevhliu, I've updated the contents of this PR to also review the `quickstart.mdx` and update what was needed; besides that, we may eventually move the `Overview.ipynb` notebook to `huggingface/notebooks`, following @stevhliu's suggestions.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5902/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5902/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5901
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5901/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5901/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5901/events
|
https://github.com/huggingface/datasets/pull/5901
| 1,727,179,016 |
PR_kwDODunzps5Rarux
| 5,901 |
Make prepare_split more robust to errors in metadata dataset_info splits
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-26T08:48:22 | 2023-06-02T06:06:38 | 2023-06-01T13:39:40 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5901",
"html_url": "https://github.com/huggingface/datasets/pull/5901",
"diff_url": "https://github.com/huggingface/datasets/pull/5901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5901.patch",
"merged_at": "2023-06-01T13:39:39"
}
|
This PR uses `split_generator.split_info` as the default value for `split_info` if any exception is raised while trying to look up `split_generator.name` in `self.info.splits` (this may happen if there is any error in the metadata dataset_info splits).
Please note that `split_info` is only used by the logger.
This fixes #5895 when passing `verification_mode="no_checks"`:
```python
ds = load_dataset(
"ArmelR/stack-exchange-instruction",
data_dir="data/finetune",
split="train",
verification_mode="no_checks",
revision="c609f1caade5cfbf3b9fe9cfa17d7cb000b457bd",
)
```
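A rough sketch of the fallback described above, mirroring the lookup shown in the #5895 traceback (illustrative, not the exact diff):
```python
try:
    split_info = self.info.splits[split_generator.name]
except Exception:
    # Metadata splits may be wrong or missing; fall back to the generator's own split info.
    split_info = split_generator.split_info
```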
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5901/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5900
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5900/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5900/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5900/events
|
https://github.com/huggingface/datasets/pull/5900
| 1,727,129,617 |
PR_kwDODunzps5RahTR
| 5,900 |
Fix minor typo in docs loading.mdx
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-26T08:10:54 | 2023-05-26T09:34:15 | 2023-05-26T09:25:12 |
MEMBER
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5900",
"html_url": "https://github.com/huggingface/datasets/pull/5900",
"diff_url": "https://github.com/huggingface/datasets/pull/5900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5900.patch",
"merged_at": "2023-05-26T09:25:12"
}
|
Minor fix.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5900/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5899
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5899/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5899/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5899/events
|
https://github.com/huggingface/datasets/pull/5899
| 1,726,279,011 |
PR_kwDODunzps5RXods
| 5,899 |
canonicalize data dir in config ID hash
|
{
"login": "kylrth",
"id": 5044802,
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylrth",
"html_url": "https://github.com/kylrth",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"repos_url": "https://api.github.com/users/kylrth/repos",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-25T18:17:10 | 2023-06-02T16:02:15 | 2023-06-02T15:52:04 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5899",
"html_url": "https://github.com/huggingface/datasets/pull/5899",
"diff_url": "https://github.com/huggingface/datasets/pull/5899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5899.patch",
"merged_at": "2023-06-02T15:52:04"
}
|
fixes #5871
The second commit is optional but improves readability.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5899/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5898
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5898/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5898/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5898/events
|
https://github.com/huggingface/datasets/issues/5898
| 1,726,190,481 |
I_kwDODunzps5m45OR
| 5,898 |
Loading the FLORES dataset for a specific language
|
{
"login": "106AbdulBasit",
"id": 36159918,
"node_id": "MDQ6VXNlcjM2MTU5OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/36159918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/106AbdulBasit",
"html_url": "https://github.com/106AbdulBasit",
"followers_url": "https://api.github.com/users/106AbdulBasit/followers",
"following_url": "https://api.github.com/users/106AbdulBasit/following{/other_user}",
"gists_url": "https://api.github.com/users/106AbdulBasit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/106AbdulBasit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/106AbdulBasit/subscriptions",
"organizations_url": "https://api.github.com/users/106AbdulBasit/orgs",
"repos_url": "https://api.github.com/users/106AbdulBasit/repos",
"events_url": "https://api.github.com/users/106AbdulBasit/events{/privacy}",
"received_events_url": "https://api.github.com/users/106AbdulBasit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-25T17:08:55 | 2023-05-25T17:21:38 | 2023-05-25T17:21:37 |
NONE
| null | null | null |
### Describe the bug
I am trying to load the FLORES dataset.
The given code is:
```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```
This raises a config-name error:
"ValueError: Config name is missing"
Now, if I add a config, it gives me another error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''.
"
How can I load the data for a specific language?
I couldn't find any tutorial. Can anyone help me out?
### Steps to reproduce the bug
Step one: load the dataset
`from datasets import load_dataset
dataset = load_dataset("facebook/flores")`
It gives the config error.
Once a config is given, it gives the error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''.
"
### Expected behavior
The dataset should load, but I am receiving an error.
### Environment info
datasets, Python
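From the HFValidationError above, it looks like the config was passed inside the repo-id string; the config name is normally given as a separate argument (a hedged example, since the exact config names depend on the dataset card):
```python
from datasets import load_dataset

# Pass the language config as the second positional argument, not inside the repo id.
dataset = load_dataset("facebook/flores", "ace_Arab")
```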
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5898/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5897/events
|
https://github.com/huggingface/datasets/pull/5897
| 1,726,135,494 |
PR_kwDODunzps5RXJaY
| 5,897 |
Fix `FixedSizeListArray` casting
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-25T16:26:33 | 2023-05-26T12:22:04 | 2023-05-26T11:57:16 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5897",
"html_url": "https://github.com/huggingface/datasets/pull/5897",
"diff_url": "https://github.com/huggingface/datasets/pull/5897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5897.patch",
"merged_at": "2023-05-26T11:57:16"
}
|
Fix cast on sliced `FixedSizeListArray`s.
Fix #5866
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5897/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5896
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5896/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5896/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5896/events
|
https://github.com/huggingface/datasets/issues/5896
| 1,726,022,500 |
I_kwDODunzps5m4QNk
| 5,896 |
HuggingFace does not cache downloaded files aggressively/early enough
|
{
"login": "geajack",
"id": 2124157,
"node_id": "MDQ6VXNlcjIxMjQxNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geajack",
"html_url": "https://github.com/geajack",
"followers_url": "https://api.github.com/users/geajack/followers",
"following_url": "https://api.github.com/users/geajack/following{/other_user}",
"gists_url": "https://api.github.com/users/geajack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geajack/subscriptions",
"organizations_url": "https://api.github.com/users/geajack/orgs",
"repos_url": "https://api.github.com/users/geajack/repos",
"events_url": "https://api.github.com/users/geajack/events{/privacy}",
"received_events_url": "https://api.github.com/users/geajack/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-25T15:14:36 | 2023-05-25T15:14:36 | null |
NONE
| null | null | null |
### Describe the bug
I wrote the following script:
```
import datasets
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```
I ran it and spent 90 minutes downloading a 20GB file. Then I saw:
```
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20.3G/20.3G [1:30:29<00:00, 3.73MB/s]
Traceback (most recent call last):
File "/home/jack/Code/Projects/Transformers/Codebase/main.py", line 5, in <module>
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
File "/home/jack/.local/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 883, in download_and_prepare
self._save_info()
File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 2037, in _save_info
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
```
And the 20GB of data was seemingly instantly gone forever, because when I ran the script again, it had to do the download again.
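A small guard that might avoid losing the 20GB download, assuming the `wikipedia` config still requires Apache Beam for post-processing: fail before downloading if the optional dependency is missing.
```python
import importlib.util

# Check the optional dependency up front so any failure happens before the 20GB download.
if importlib.util.find_spec("apache_beam") is None:
    raise RuntimeError("Install apache_beam first: pip install apache_beam")

import datasets
dataset = datasets.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```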
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
datasets 2.10.1
Python 3.10
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5896/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5895
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5895/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5895/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5895/events
|
https://github.com/huggingface/datasets/issues/5895
| 1,725,467,252 |
I_kwDODunzps5m2Ip0
| 5,895 |
The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset
|
{
"login": "DongHande",
"id": 45357817,
"node_id": "MDQ6VXNlcjQ1MzU3ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DongHande",
"html_url": "https://github.com/DongHande",
"followers_url": "https://api.github.com/users/DongHande/followers",
"following_url": "https://api.github.com/users/DongHande/following{/other_user}",
"gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DongHande/subscriptions",
"organizations_url": "https://api.github.com/users/DongHande/orgs",
"repos_url": "https://api.github.com/users/DongHande/repos",
"events_url": "https://api.github.com/users/DongHande/events{/privacy}",
"received_events_url": "https://api.github.com/users/DongHande/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-25T09:39:06 | 2023-05-29T02:32:12 | 2023-05-29T02:32:12 |
NONE
| null | null | null |
### Describe the bug
When I load the ArmelR/stack-exchange-instruction dataset, I encounter a bug that seems to be caused by confusing the data dir name string with the split string of the dataset.
When I use `datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)`, it fails. But it succeeds when I add the `streaming=True` parameter.
The website of the dataset is https://huggingface.co/datasets/ArmelR/stack-exchange-instruction/ .
The traceback logs are as below:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 1706, in _prepare_split
split_info = self.info.splits[split_generator.name]
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/splits.py", line 530, in __getitem__
instructions = make_file_instructions(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 112, in make_file_instructions
name2filenames = {
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 113, in <dictcomp>
info.name: filenames_for_dataset_split(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 70, in filenames_for_dataset_split
prefix = filename_prefix_for_split(dataset_name, split)
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 54, in filename_prefix_for_split
if os.path.basename(name) != name:
File "/home/xxx/miniconda3/envs/code/lib/python3.9/posixpath.py", line 142, in basename
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
### Steps to reproduce the bug
1. import datasets library function: ```from datasets import load_dataset```
2. load dataset: ```ds=load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)```
### Expected behavior
The dataset can be loaded successfully without the streaming setting.
### Environment info
Linux,
python=3.9
datasets=2.12.0
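Two workarounds that are mentioned in this report and in the linked fix (#5901), hedged since they are not re-verified here:
```python
from datasets import load_dataset

# 1) Streaming mode, which the report says already works:
ds = load_dataset("ArmelR/stack-exchange-instruction", data_dir="data/finetune",
                  split="train", streaming=True, use_auth_token=True)

# 2) Skipping split verification, as done in the fix for #5895 (PR #5901):
ds = load_dataset("ArmelR/stack-exchange-instruction", data_dir="data/finetune",
                  split="train", verification_mode="no_checks", use_auth_token=True)
```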
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5895/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5894
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5894/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5894/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5894/events
|
https://github.com/huggingface/datasets/pull/5894
| 1,724,774,910 |
PR_kwDODunzps5RSjot
| 5,894 |
Force overwrite existing filesystem protocol
|
{
"login": "baskrahmer",
"id": 24520725,
"node_id": "MDQ6VXNlcjI0NTIwNzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/24520725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskrahmer",
"html_url": "https://github.com/baskrahmer",
"followers_url": "https://api.github.com/users/baskrahmer/followers",
"following_url": "https://api.github.com/users/baskrahmer/following{/other_user}",
"gists_url": "https://api.github.com/users/baskrahmer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskrahmer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskrahmer/subscriptions",
"organizations_url": "https://api.github.com/users/baskrahmer/orgs",
"repos_url": "https://api.github.com/users/baskrahmer/repos",
"events_url": "https://api.github.com/users/baskrahmer/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskrahmer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-24T21:41:53 | 2023-05-25T06:52:08 | 2023-05-25T06:42:33 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5894",
"html_url": "https://github.com/huggingface/datasets/pull/5894",
"diff_url": "https://github.com/huggingface/datasets/pull/5894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5894.patch",
"merged_at": "2023-05-25T06:42:33"
}
|
Fix #5876
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5894/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5893
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5893/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5893/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5893/events
|
https://github.com/huggingface/datasets/pull/5893
| 1,722,519,056 |
PR_kwDODunzps5RK40K
| 5,893 |
Load cached dataset as iterable
|
{
"login": "mariusz-jachimowicz-83",
"id": 10278877,
"node_id": "MDQ6VXNlcjEwMjc4ODc3",
"avatar_url": "https://avatars.githubusercontent.com/u/10278877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariusz-jachimowicz-83",
"html_url": "https://github.com/mariusz-jachimowicz-83",
"followers_url": "https://api.github.com/users/mariusz-jachimowicz-83/followers",
"following_url": "https://api.github.com/users/mariusz-jachimowicz-83/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusz-jachimowicz-83/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariusz-jachimowicz-83/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusz-jachimowicz-83/subscriptions",
"organizations_url": "https://api.github.com/users/mariusz-jachimowicz-83/orgs",
"repos_url": "https://api.github.com/users/mariusz-jachimowicz-83/repos",
"events_url": "https://api.github.com/users/mariusz-jachimowicz-83/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariusz-jachimowicz-83/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-23T17:40:35 | 2023-06-01T11:58:24 | 2023-06-01T11:51:29 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5893",
"html_url": "https://github.com/huggingface/datasets/pull/5893",
"diff_url": "https://github.com/huggingface/datasets/pull/5893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5893.patch",
"merged_at": "2023-06-01T11:51:29"
}
|
This allows loading an IterableDataset from the cached Arrow file, so it can be used to train models.
See https://github.com/huggingface/datasets/issues/5481
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5893/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5892
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5892/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5892/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5892/events
|
https://github.com/huggingface/datasets/issues/5892
| 1,722,503,824 |
I_kwDODunzps5mq1KQ
| 5,892 |
User access requests with manual review do not notify the dataset owner
|
{
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-23T17:27:46 | 2023-07-21T13:55:37 | 2023-07-21T13:55:36 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
When user access requests are enabled and new requests are set to Manual Review, the dataset owner should be notified of the pending requests. However, currently nothing happens, so a dataset request can go unanswered for quite some time until the owner happens to check that particular dataset's Settings pane.
### Steps to reproduce the bug
1. Enable a dataset's user access requests
2. Set to Manual Review
3. Ask another HF user to request access to the dataset
4. Dataset owner is not notified
### Expected behavior
The dataset owner should receive some kind of notification, perhaps in their HF site inbox, or by email, when a dataset access request is made and manual review is enabled.
### Environment info
n/a
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5892/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5891
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5891/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5891/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5891/events
|
https://github.com/huggingface/datasets/pull/5891
| 1,722,384,135 |
PR_kwDODunzps5RKchn
| 5,891 |
Make split slicing consistent with list slicing
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-23T16:04:33 | 2023-05-23T16:11:12 | null |
CONTRIBUTOR
| null | true |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5891",
"html_url": "https://github.com/huggingface/datasets/pull/5891",
"diff_url": "https://github.com/huggingface/datasets/pull/5891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5891.patch",
"merged_at": null
}
|
Fix #1774, fix #5875
TODO: a test
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5891/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5889
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5889/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5889/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5889/events
|
https://github.com/huggingface/datasets/issues/5889
| 1,722,373,618 |
I_kwDODunzps5mqVXy
| 5,889 |
Token Alignment for input and output data over train and test batch/dataset.
|
{
"login": "akesh1235",
"id": 125154243,
"node_id": "U_kgDOB3Wzww",
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akesh1235",
"html_url": "https://github.com/akesh1235",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-23T15:58:55 | 2023-05-23T15:58:55 | null |
NONE
| null | null | null |
`data`
> DatasetDict({
train: Dataset({
features: ['input', 'output'],
num_rows: 4500
})
test: Dataset({
features: ['input', 'output'],
num_rows: 500
})
})
**# input (in-correct sentence)**
`data['train'][0]['input']`
**>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York'
**# output (correct sentence)**
`data['train'][0]['output']`
**>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.'
**I want to align the output tokens with the input**
```
# tokenize both inputs and targets
def tokenize_fn(batch):
    # tokenize the input sequence first
    # this populates input_ids, attention_mask, etc.
    tokenized_inputs = tokenizer(
        batch['input']
    )
    labels_batch = tokenizer.tokenize(batch['output'])  # original targets
    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        word_ids = tokenized_inputs[i].word_ids()
        aligned_labels_batch.append(align_targets(labels, word_ids))  # align_targets is another user-defined function called here
    # recall: the 'target' must be stored in a key called 'labels'
    tokenized_inputs['labels'] = aligned_labels_batch
    return tokenized_inputs
```
```
data.map(
tokenize_fn,
batched=True,
remove_columns=data['train'].column_names,
)
```
When this user-defined function is mapped over every record of the train and test splits, I get the following errors:
**1.** **raise DatasetTransformationNotAllowedError(
3457 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."**
**2.** **TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]**
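A hedged sketch of one way to avoid the second error: `tokenizer.tokenize` expects a single string, so iterate over the batch and tokenize each target separately. The `align_targets` helper below stands in for the user-defined function mentioned above, a fast tokenizer is assumed, and any attached index is assumed to have been removed with `.drop_index()` first.
```python
def tokenize_fn(batch):
    tokenized_inputs = tokenizer(batch["input"], truncation=True)
    aligned_labels_batch = []
    for i, target in enumerate(batch["output"]):
        target_tokens = tokenizer.tokenize(target)            # one string at a time
        word_ids = tokenized_inputs.word_ids(batch_index=i)   # requires a fast tokenizer
        aligned_labels_batch.append(align_targets(target_tokens, word_ids))
    tokenized_inputs["labels"] = aligned_labels_batch
    return tokenized_inputs

tokenized = data.map(
    tokenize_fn,
    batched=True,
    remove_columns=data["train"].column_names,
)
```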
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5889/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5887
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5887/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5887/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5887/events
|
https://github.com/huggingface/datasets/issues/5887
| 1,722,166,382 |
I_kwDODunzps5mpixu
| 5,887 |
HuggingFace dataset example gives an error
|
{
"login": "donhuvy",
"id": 1328316,
"node_id": "MDQ6VXNlcjEzMjgzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1328316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donhuvy",
"html_url": "https://github.com/donhuvy",
"followers_url": "https://api.github.com/users/donhuvy/followers",
"following_url": "https://api.github.com/users/donhuvy/following{/other_user}",
"gists_url": "https://api.github.com/users/donhuvy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donhuvy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donhuvy/subscriptions",
"organizations_url": "https://api.github.com/users/donhuvy/orgs",
"repos_url": "https://api.github.com/users/donhuvy/repos",
"events_url": "https://api.github.com/users/donhuvy/events{/privacy}",
"received_events_url": "https://api.github.com/users/donhuvy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
] |
[
"message",
"documentation_url"
] | 2023-05-23T14:09:05 | 2023-07-25T14:01:01 | 2023-07-25T14:01:00 |
NONE
| null | null | null |
### Describe the bug


### Steps to reproduce the bug
Use the linked notebook as the reference document: https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb#scrollTo=biqDH9vpvSVz
```python
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
if i > 5:
break
```
Error
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-44-7040b885f382>](https://localhost:8080/#) in <cell line: 5>()
5 for i, batch in enumerate(dataloader):
6 batch.to(device)
----> 7 outputs = model(**batch)
8 loss = outputs.loss
9 loss.backward()
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: DistilBertForQuestionAnswering.forward() got an unexpected keyword argument 'token_type_ids'
```
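One possible workaround, assuming the notebook keeps the `bert-base-cased` tokenizer while using a DistilBERT model: drop the `token_type_ids` key before the forward pass (a sketch only; the actual fix in #5902 matches the tokenizer and model checkpoints instead).
```python
for i, batch in enumerate(dataloader):
    # Remove the key DistilBERT does not accept, then move tensors to the device.
    batch = {k: v.to(device) for k, v in batch.items() if k != "token_type_ids"}
    outputs = model(**batch)
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    model.zero_grad()
    print(f'Step {i} - loss: {loss:.3}')
    if i > 5:
        break
```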
https://github.com/huggingface/datasets/assets/1328316/5d8b1d61-9337-4d59-8423-4f37f834c156
### Expected behavior
The example should run successfully on Google Colab (free tier).
### Environment info
Windows 11 x64, Google Colab free (my Google Drive is nearly empty, about 200 MB used, but I don't think that causes the problem)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5887/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5886
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5886/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5886/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5886/events
|
https://github.com/huggingface/datasets/issues/5886
| 1,721,070,225 |
I_kwDODunzps5mlXKR
| 5,886 |
Use a work-stealing algorithm for parallel computing
|
{
"login": "1014661165",
"id": 46060451,
"node_id": "MDQ6VXNlcjQ2MDYwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/46060451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1014661165",
"html_url": "https://github.com/1014661165",
"followers_url": "https://api.github.com/users/1014661165/followers",
"following_url": "https://api.github.com/users/1014661165/following{/other_user}",
"gists_url": "https://api.github.com/users/1014661165/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1014661165/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1014661165/subscriptions",
"organizations_url": "https://api.github.com/users/1014661165/orgs",
"repos_url": "https://api.github.com/users/1014661165/repos",
"events_url": "https://api.github.com/users/1014661165/events{/privacy}",
"received_events_url": "https://api.github.com/users/1014661165/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-23T03:08:44 | 2023-05-24T15:30:09 | null |
NONE
| null | null | null |
### Feature request
When I used the `Dataset.map` API to process data concurrently, I found that
it gets slower and slower as it gets closer to completion. Then I read the source code of arrow_dataset.py and found that it shards the dataset and uses a multiprocessing pool to execute each shard. This can let the slowest task drag out the entire program's execution time, especially when processing a huge dataset.
### Motivation
Use a work-stealing algorithm instead of static sharding for parallel computing, to optimize performance.
### Your contribution
just an idea.
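A minimal sketch of the idea, just to illustrate the scheduling difference: instead of giving each worker one fixed shard, workers pull small chunks from a shared queue, so a slow chunk cannot stall the whole job. This is illustrative only, not how `Dataset.map` is implemented, and `do_work` is a hypothetical per-example function.
```python
from multiprocessing import Pool

def process_chunk(indices):
    # Placeholder for the per-example work normally done inside the map function.
    return [do_work(i) for i in indices]

def map_with_dynamic_chunks(num_rows, num_proc, chunk_size=1_000):
    chunks = [range(start, min(start + chunk_size, num_rows))
              for start in range(0, num_rows, chunk_size)]
    with Pool(num_proc) as pool:
        # imap_unordered hands out chunks dynamically: an idle worker immediately
        # grabs the next chunk instead of waiting on a large fixed shard.
        results = list(pool.imap_unordered(process_chunk, chunks))
    return results
```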
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5886/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5885
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5885/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5885/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5885/events
|
https://github.com/huggingface/datasets/pull/5885
| 1,720,954,440 |
PR_kwDODunzps5RFjTL
| 5,885 |
Modify `is_remote_filesystem` to return True for FUSE-mounted paths
|
{
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-23T01:04:54 | 2023-05-25T08:50:48 | null |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5885",
"html_url": "https://github.com/huggingface/datasets/pull/5885",
"diff_url": "https://github.com/huggingface/datasets/pull/5885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5885.patch",
"merged_at": null
}
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5885/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5888
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5888/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5888/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5888/events
|
https://github.com/huggingface/datasets/issues/5888
| 1,722,290,363 |
I_kwDODunzps5mqBC7
| 5,888 |
A way to upload and visualize .mp4 files (millions of them) as part of a dataset
|
{
"login": "AntreasAntoniou",
"id": 10792502,
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AntreasAntoniou",
"html_url": "https://github.com/AntreasAntoniou",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-22T18:05:26 | 2023-06-23T03:37:16 | null |
NONE
| null | null | null |
**Is your feature request related to a problem? Please describe.**
I recently chose to use huggingface hub as the home for a large multi modal dataset I've been building. https://huggingface.co/datasets/Antreas/TALI
It combines images, text, audio and video. Now, I could very easily upload a dataset made via datasets.Dataset.from_generator, as long as it did not include video files. I found that including .mp4 files in the entries would not auto-upload those files.
Hence I tried to upload them myself. I quickly found out that uploading many small files is a very bad way to use git lfs, and that it would take ages, so I resorted to using 7z to pack them all up. But then I had a new problem.
My dataset had a size of 1.9TB. Trying to upload such a large file with the default huggingface_hub API always resulted in timeouts etc. So I decided to split the large files into chunks of 5GB each and reupload.
So, eventually it all worked out. But now the dataset can't be properly and natively used by the datasets API because of all the needed preprocessing, and furthermore the Hub is unable to visualize the data.
**Describe the solution you'd like**
A native way to upload large datasets that include .mp4 or other video types.
**Describe alternatives you've considered**
Already explained earlier
**Additional context**
https://huggingface.co/datasets/Antreas/TALI
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5888/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5884
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5884/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5884/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5884/events
|
https://github.com/huggingface/datasets/issues/5884
| 1,719,548,172 |
I_kwDODunzps5mfjkM
| 5,884 |
`Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_`
|
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
] |
[
"message",
"documentation_url"
] | 2023-05-22T12:03:06 | 2023-06-09T16:04:56 | 2023-06-09T16:04:55 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception, e.g. for the `é` character: `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`.
### Steps to reproduce the bug
Running the following script will eventually fail when it reaches a batch that contains non-ASCII-compatible strings.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
>>> UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)
```
### Expected behavior
The following script should run properly, making sure that the strings are either `numpy.unicode_` or `numpy.string` instead of `numpy.bytes_`, since some characters are not ASCII-compatible and would otherwise cause an issue when applying the `map`.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
```
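For what it's worth, the ASCII failure can be reproduced with plain `numpy`; this is a hedged sketch of the cast that (as far as I can tell) happens under the hood, not `datasets` or `tensorflow` code:
```python
import numpy as np

arr = np.array(["café"])   # unicode dtype, e.g. '<U4'
try:
    arr.astype(np.bytes_)  # numpy's default str -> bytes cast uses ASCII and fails
except UnicodeEncodeError as err:
    print(err)

# An explicit UTF-8 encode/decode round-trips without errors:
utf8 = np.char.encode(arr, encoding="utf-8")
print(np.char.decode(utf8, encoding="utf-8"))  # ['café']
```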
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5884/timeline
| null |
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/5883
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5883/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5883/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5883/events
|
https://github.com/huggingface/datasets/pull/5883
| 1,719,527,597 |
PR_kwDODunzps5RAkYi
| 5,883 |
Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset`
|
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-22T11:51:07 | 2023-06-08T11:09:03 | 2023-06-06T16:49:15 |
CONTRIBUTOR
| null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5883",
"html_url": "https://github.com/huggingface/datasets/pull/5883",
"diff_url": "https://github.com/huggingface/datasets/pull/5883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5883.patch",
"merged_at": "2023-06-06T16:49:15"
}
|
## What's in this PR?
This PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, which converts a 🤗 Hugging Face Dataset into a TensorFlow Dataset.
The main bug solved in this PR concerns string encoding: for safety purposes, the internal conversion of `numpy` arrays with a unicode/string `dtype` turns them into `numpy.bytes_` (more information in the docstring of https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#L210). This is triggered when using `tensorflow.numpy_function`, which applies another type cast besides the one that `datasets` does, so the cast is applied at least twice per entry/batch. As a result, the `numpy.unicode_` dtype defined when the data in the batch is a string is ignored and replaced by `numpy.bytes_`.
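As a hedged, simplified illustration (not the actual PR code): handing `tf.numpy_function` UTF-8 encoded bytes together with a `tf.string` output dtype avoids the implicit ASCII conversion, and the consumer can decode back to `str`.
```python
import numpy as np
import tensorflow as tf

def fetch_batch(i):
    # stand-in for the datasets-side fetch; return UTF-8 bytes instead of unicode
    texts = np.array(["café", "naïve"])
    return np.char.encode(texts, encoding="utf-8")

ds = tf.data.Dataset.range(1).map(
    lambda i: tf.numpy_function(fetch_batch, [i], Tout=tf.string)
)
for batch in ds:
    print([t.decode("utf-8") for t in batch.numpy()])
```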
Besides that, some other minor things have been fixed:
* Made `batch_size` an optional parameter in `to_tf_dataset`
* Map the `tensorflow` output dtypes just once, and not in every `tf.function` call during `map`
* Keep `numpy` formatting in the `datasets.Dataset` if already formatted like it, no need to format it again as `numpy`
* Docstring indentation in `dataset_to_tf` and `multiprocess_dataset_to_tf`
## What's missing in this PR?
I can include some integration tests if needed, to validate that `batch_size` is optional, and that the tensors in the TF-Dataset can be looped over with no issues as before.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5883/timeline
| null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5881
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5881/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5881/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5881/events
|
https://github.com/huggingface/datasets/issues/5881
| 1,719,402,643 |
I_kwDODunzps5mfACT
| 5,881 |
Split dataset by node: index error when sharding iterable dataset
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-22T10:36:13 | 2023-05-23T08:32:14 | null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Context: we're splitting an iterable dataset by node and then passing it to a torch data loader with multiple workers.
When we iterate over it for 5 steps, we don't get an error.
When we instead iterate over it for 8 steps, we get an `IndexError` when fetching the data if we have too many workers.
### Steps to reproduce the bug
Here, we have 2 JAX processes (`jax.process_count() = 2`) which we split the dataset over. The dataset loading script can be found here: https://huggingface.co/datasets/distil-whisper/librispeech_asr/blob/c6a1e805cbfeed5057400ac5937327d7e30281b8/librispeech_asr.py#L310
<details>
<summary> Code to reproduce </summary>
```python
from datasets import load_dataset
import jax
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader
from tqdm import tqdm
# load an example dataset (https://huggingface.co/datasets/distil-whisper/librispeech_asr)
dataset = load_dataset("distil-whisper/librispeech_asr", "all", split="train.clean.100", streaming=True)
# just keep the text column -> no need to define a collator
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})
# define some constants
batch_size = 256
num_examples = 5 # works for 5 examples, doesn't for 8
num_workers = dataset_text.n_shards
# try with multiple workers
dataloader = DataLoader(dataset_text, batch_size=batch_size, num_workers=num_workers, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Multiple workers"):
if i == num_examples:
break
# try splitting by node (we can't do this with `dataset_text` since `split_dataset_by_node` expects the Audio column for an ASR dataset)
dataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count())
# remove the text column again
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})
dataloader = DataLoader(dataset_text, batch_size=16, num_workers=num_workers // 2, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Split by node"):
if i == num_examples:
break
# too many workers
dataloader = DataLoader(dataset_text, batch_size=256, num_workers=num_workers, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"):
if i == num_examples:
break
```
</details>
<details>
<summary> With 5 examples: </summary>
```
Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.33s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 5/5 [00:13<00:00, 2.76s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7.
Too many workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:15<00:00, 3.03s/it]
```
</details>
<details>
<summary> With 8 examples: </summary>
```
Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 8/8 [00:13<00:00, 1.71s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 8/8 [00:11<00:00, 1.38s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7.
Too many workers: 88%|██████████████████████████████████████████████████████████▋ | 7/8 [00:13<00:01, 1.89s/it]
Traceback (most recent call last):
File "distil-whisper/test_librispeech.py", line 36, in <module>
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__
for obj in iterable:
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
return self._process_data(data)
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 644, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 7.
Original Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 986, in __iter__
yield from self._iter_pytorch(ex_iterable)
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 920, in _iter_pytorch
for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 540, in shard_data_sources
self.ex_iterable.shard_data_sources(worker_id, num_workers),
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 796, in shard_data_sources
self.ex_iterable.shard_data_sources(worker_id, num_workers),
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 126, in shard_data_sources
requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])
File "/home/sanchitgandhi/datasets/src/datasets/utils/sharding.py", line 76, in _merge_gen_kwargs
for key in gen_kwargs_list[0]
IndexError: list index out of range
```
</details>
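Until this is fixed, a hedged workaround that just follows the warning in the logs (not a real fix) is to cap the number of dataloader workers at the per-node `n_shards`, reusing `dataset_text` and `DataLoader` from the script above:
```python
# Workaround sketch: never ask for more workers than the node holds shards.
num_workers = min(num_workers, dataset_text.n_shards)
dataloader = DataLoader(dataset_text, batch_size=256, num_workers=num_workers, drop_last=True)
```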
### Expected behavior
Should pass for both 5 and 8 examples
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5881/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5880/events
|
https://github.com/huggingface/datasets/issues/5880
| 1,719,090,101 |
I_kwDODunzps5mdzu1
| 5,880 |
load_dataset from s3 file system through streaming can't iterate data
|
{
"login": "janineguo",
"id": 59083384,
"node_id": "MDQ6VXNlcjU5MDgzMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/59083384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janineguo",
"html_url": "https://github.com/janineguo",
"followers_url": "https://api.github.com/users/janineguo/followers",
"following_url": "https://api.github.com/users/janineguo/following{/other_user}",
"gists_url": "https://api.github.com/users/janineguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janineguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janineguo/subscriptions",
"organizations_url": "https://api.github.com/users/janineguo/orgs",
"repos_url": "https://api.github.com/users/janineguo/repos",
"events_url": "https://api.github.com/users/janineguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/janineguo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-22T07:40:27 | 2023-05-26T12:52:08 | null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
I have a JSON file in my S3 file system (MinIO). I can use `load_dataset` to get the file link, but I can't iterate over it.
<img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0">
<img width="1144" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/76872af3-8b3c-42ff-9f55-528c920a7af1">
We can change 4 lines to fix this bug; please check whether this is OK.
<img width="941" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/5a22155a-ece7-496c-8506-047e5c235cd3">
### Steps to reproduce the bug
1. Store a file in your S3 file system
2. Use `load_dataset` to read it through streaming
3. Iterate over it (see the sketch below)
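A hedged sketch of these steps (the bucket name, credentials and endpoint below are placeholders for a MinIO setup, not values from this report):
```python
from datasets import load_dataset

storage_options = {
    "key": "minio-access-key",
    "secret": "minio-secret-key",
    "client_kwargs": {"endpoint_url": "http://localhost:9000"},
}
ds = load_dataset(
    "json",
    data_files="s3://my-bucket/train.json",
    streaming=True,
    storage_options=storage_options,
)
for example in ds["train"]:
    print(example)
    break
```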
### Expected behavior
The streamed dataset can be iterated successfully
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5880/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/5880/timeline
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5878
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5878/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5878/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5878/events
|
https://github.com/huggingface/datasets/issues/5878
| 1,718,203,843 |
I_kwDODunzps5mabXD
| 5,878 |
Prefetching for IterableDataset
|
{
"login": "vyeevani",
"id": 30946190,
"node_id": "MDQ6VXNlcjMwOTQ2MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/30946190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyeevani",
"html_url": "https://github.com/vyeevani",
"followers_url": "https://api.github.com/users/vyeevani/followers",
"following_url": "https://api.github.com/users/vyeevani/following{/other_user}",
"gists_url": "https://api.github.com/users/vyeevani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vyeevani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyeevani/subscriptions",
"organizations_url": "https://api.github.com/users/vyeevani/orgs",
"repos_url": "https://api.github.com/users/vyeevani/repos",
"events_url": "https://api.github.com/users/vyeevani/events{/privacy}",
"received_events_url": "https://api.github.com/users/vyeevani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"message",
"documentation_url"
] | 2023-05-20T15:25:40 | 2023-06-01T17:40:00 | null |
NONE
| null | null | null |
### Feature request
Add support for prefetching the next n batches through `IterableDataset` to reduce the batch-loading bottleneck in the training loop.
### Motivation
The primary motivation behind this is to use hardware accelerators alongside a streaming dataset. This is required when you are in a low-RAM or low-disk-space setting, as well as for quick iteration where you're cycling through different accelerator environments (e.g. changing EC2 instances quickly to figure out batches/sec for a particular architecture).
Currently, using the IterableDataset results in accelerators becoming basically useless due to the massive bottleneck induced by the dataset's lazy loading/transform/mapping.
I've considered two alternatives (a sketch of the second is below):
1. A PyTorch DataLoader that handles this. However, I'm using JAX, and I believe this is a piece of functionality that should live in the stream class.
2. Replicating the "num_workers" part of the PyTorch DataLoader to eagerly load batches and apply the transform, so Arrow caching will automatically cache results and make them accessible.
### Your contribution
I may or may not have time to do this. Currently, I've written a basic multiprocessing approach that handles the eager loading for my own use case, with code that isn't integrated into `datasets`. I'd definitely see this as being the default over the regular Dataset for most people, given that they wouldn't have to wait on the dataset while also not worrying about performance.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5878/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5878/timeline
| null | null | false |