| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,413,623,687
|
https://api.github.com/repos/huggingface/datasets/issues/5134
|
https://github.com/huggingface/datasets/issues/5134
| 5,134
|
Raise ImportError instead of OSError if required extraction library is not installed
|
closed
| 2
| 2022-10-18T17:53:46
| 2022-10-25T15:56:59
| 2022-10-25T15:56:59
|
mariosasko
|
[
"enhancement",
"good first issue",
"hacktoberfest"
] |
According to the official Python docs, `OSError` should be thrown in the following situations:
> This exception is raised when a system function returns a system-related error, including I/O failures such as “file not found” or “disk full” (not for illegal argument types or other incidental errors).
Hence, it makes more sense to raise `ImportError` instead of `OSError` when the required extraction/decompression library is not installed.
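A minimal sketch of the proposed behavior (hypothetical helper name, not the actual `datasets` code):

```python
def require_extraction_library(module_name):
    # Raise ImportError (not OSError) when an optional extraction
    # library is missing, as the Python docs suggest for this case.
    try:
        __import__(module_name)
    except ImportError as err:
        raise ImportError(
            f"To extract this archive, please install '{module_name}'."
        ) from err

require_extraction_library("json")  # present in the stdlib, no error
```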
| false
|
1,413,623,462
|
https://api.github.com/repos/huggingface/datasets/issues/5133
|
https://github.com/huggingface/datasets/issues/5133
| 5,133
|
Tensor operation not functioning in dataset mapping
|
closed
| 2
| 2022-10-18T17:53:35
| 2022-10-19T04:15:45
| 2022-10-19T04:15:44
|
xinghaow99
|
[
"bug"
] |
## Describe the bug
I'm doing a torch.mean() operation in data preprocessing, and it's not working.
## Steps to reproduce the bug
```
from transformers import pipeline
import torch
import numpy as np
from datasets import load_dataset
device = 'cuda:0'
raw_dataset = load_dataset("glue", "sst2")
feature_extraction = pipeline('feature-extraction', 'bert-base-uncased', device=device)
def extracted_data(examples):
# feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device)
# feature = torch.mean(feature, dim=1)
feature = np.asarray(feature_extraction(examples['sentence'], batch_size=16)).squeeze().mean(1)
print(feature.shape)
return {'feature': feature}
extracted_dataset = raw_dataset.map(extracted_data, batched=True, batch_size=16)
```
## Results
When running with `torch.mean()`, the printed shape is [16, seq_len, 768], which is exactly the same as before the operation, while the numpy version works just fine and gives [16, 768].
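For reference, averaging over the sequence axis should drop that axis; a quick NumPy check of the expected shapes (toy array standing in for the extractor output):

```python
import numpy as np

# Toy stand-in for the extractor output: batch of 16 sequences of
# length 12 with hidden size 768.
features = np.zeros((16, 12, 768))

pooled = features.mean(axis=1)  # average over the sequence dimension
print(pooled.shape)  # (16, 768)
```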
## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-4.4.0-142-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| false
|
1,413,607,306
|
https://api.github.com/repos/huggingface/datasets/issues/5132
|
https://github.com/huggingface/datasets/issues/5132
| 5,132
|
Deprecate `num_proc` parameter in `DownloadManager.extract`
|
closed
| 5
| 2022-10-18T17:41:05
| 2022-10-25T15:56:46
| 2022-10-25T15:56:46
|
mariosasko
|
[
"enhancement",
"good first issue",
"hacktoberfest"
] |
The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `DownloadConfig`'s `num_proc` to `map_nested` instead, as it's done in `DownloadManager.download`.
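A minimal sketch of the proposed deprecation pattern (hypothetical signature, not the actual `datasets` code):

```python
import warnings

def extract(path, num_proc="deprecated"):
    # Hypothetical sketch: warn when the old parameter is passed and
    # rely on DownloadConfig's num_proc instead.
    if num_proc != "deprecated":
        warnings.warn(
            "'num_proc' is deprecated and will be removed; pass "
            "DownloadConfig(num_proc=...) instead.",
            FutureWarning,
        )
    return path

extract("archive.tar")  # no warning on the new code path
```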
| false
|
1,413,534,863
|
https://api.github.com/repos/huggingface/datasets/issues/5131
|
https://github.com/huggingface/datasets/issues/5131
| 5,131
|
WikiText 103 tokenizer hangs
|
closed
| 1
| 2022-10-18T16:44:00
| 2023-08-08T08:42:40
| 2023-07-21T14:41:51
|
TrentBrick
|
[
"bug"
] |
See issue here: https://github.com/huggingface/transformers/issues/19702
| false
|
1,413,435,000
|
https://api.github.com/repos/huggingface/datasets/issues/5130
|
https://github.com/huggingface/datasets/pull/5130
| 5,130
|
Avoid extra cast in `class_encode_column`
|
closed
| 1
| 2022-10-18T15:31:24
| 2022-10-19T11:53:02
| 2022-10-19T11:50:46
|
mariosasko
|
[] |
Pass the updated features to `map` to avoid the `cast` in `class_encode_column`.
| true
|
1,413,031,664
|
https://api.github.com/repos/huggingface/datasets/issues/5129
|
https://github.com/huggingface/datasets/issues/5129
| 5,129
|
unexpected `cast` or `class_encode_column` result after `rename_column`
|
closed
| 4
| 2022-10-18T11:15:24
| 2022-10-19T03:02:26
| 2022-10-19T03:02:26
|
quaeast
|
[
"bug"
] |
## Describe the bug
When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it converts all the values in this column into one value. I also ran this script in version 2.5.2, where this bug does not appear, so I switched back to the older version.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("amazon_reviews_multi", "en")
data = dataset['train']
data = data.remove_columns(
[
"review_id",
"product_id",
"reviewer_id",
"review_title",
"language",
"product_category",
]
)
data = data.rename_column("review_body", "text")
data1 = data.class_encode_column("stars")
print(set(data1.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
data = data.rename_column("stars", "label")
print(set(data.data.columns[0]))
# output: {<pyarrow.Int32Scalar: 5>, <pyarrow.Int32Scalar: 4>, <pyarrow.Int32Scalar: 1>, <pyarrow.Int32Scalar: 3>, <pyarrow.Int32Scalar: 2>}
data2 = data.class_encode_column("label")
print(set(data2.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 0>}
```
## Expected results
the last print should be:
{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
## Actual results
but it output:
{<pyarrow.Int64Scalar: 0>}
## Environment info
- `datasets` version: 2.6.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| false
|
1,412,783,855
|
https://api.github.com/repos/huggingface/datasets/issues/5128
|
https://github.com/huggingface/datasets/pull/5128
| 5,128
|
Make filename matching more robust
|
closed
| 3
| 2022-10-18T08:22:48
| 2022-10-28T13:07:38
| 2022-10-28T13:05:06
|
riccardobucco
|
[] |
Fix #5046
| true
|
1,411,897,544
|
https://api.github.com/repos/huggingface/datasets/issues/5127
|
https://github.com/huggingface/datasets/pull/5127
| 5,127
|
[WIP] WebDataset export
|
closed
| 2
| 2022-10-17T16:50:22
| 2024-01-11T06:27:04
| 2024-01-08T14:25:43
|
lhoestq
|
[] |
I added a first draft of the `IterableDataset.to_wds` method.
You can use it to save a dataset loaded in streaming mode as a WebDataset locally.
The API can be further improved to allow exporting to a cloud storage like the HF Hub.
I also included sharding with a default max shard size of 500MB (uncompressed), and it is single-processed for now.
Choosing the number of shards is not implemented yet - though if we know the size of the `IterableDataset` this is probably doable.
For example
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> ds.to_wds("output_dir", compress=True)
>>> import webdataset as wds
>>> ds = wds.WebDataset("output_dir/rotten_tomatoes-train-000000.tar.gz").decode()
>>> next(iter(ds))
{'__key__': '0',
'__url__': 'output_dir/rotten_tomatoes-train-000000.tar.gz',
'label.cls': 1,
'text.txt': 'the rock is destined to be the 21st century\'s new ..., jean-claud van damme or steven segal .'}
```
### Implementation details
The WebDataset format is made of TAR archives containing a series of files per example. For example one pair of `image.jpg` and `label.cls` for image classification.
WebDataset automatically decodes serialized data based on the extension of the files and outputs a dictionary. For example `{"image.png": np.array(...), "label.cls": 0}` if you choose the numpy decoding.
To use the automatic decoding, I store each field of each example as a file with its corresponding extension (jpg, json, cls, etc.)
While this is useful to end up with a dictionary with one key per column and appropriate decoding, it can create huge TAR archives if the dataset is made of small samples of text - probably because of useless TAR metadata for each file. This also makes loading super slow: iterating on SQuAD takes 50sec vs 7sec using `datasets` in streaming mode.
I haven't taken a look at alternatives for text datasets made out of small samples, but for image datasets this can already be used to run some benchmarks.
| true
|
1,411,757,124
|
https://api.github.com/repos/huggingface/datasets/issues/5126
|
https://github.com/huggingface/datasets/pull/5126
| 5,126
|
Fix class name of symbolic link
|
closed
| 4
| 2022-10-17T15:11:02
| 2022-11-14T14:40:18
| 2022-11-14T14:40:18
|
riccardobucco
|
[] |
Fix #5098
| true
|
1,411,602,813
|
https://api.github.com/repos/huggingface/datasets/issues/5125
|
https://github.com/huggingface/datasets/pull/5125
| 5,125
|
Add `pyproject.toml` for `black`
|
closed
| 1
| 2022-10-17T13:38:47
| 2024-11-20T13:36:11
| 2022-10-17T14:21:09
|
mariosasko
|
[] |
Add `pyproject.toml` as a config file for the `black` tool to support VS Code's auto-formatting on save (and to be more consistent with the other HF projects).
| true
|
1,411,159,725
|
https://api.github.com/repos/huggingface/datasets/issues/5124
|
https://github.com/huggingface/datasets/pull/5124
| 5,124
|
Install tensorflow-macos dependency conditionally
|
closed
| 1
| 2022-10-17T08:45:08
| 2022-10-19T09:12:17
| 2022-10-19T09:10:06
|
albertvillanova
|
[] |
Fix #5118.
| true
|
1,410,828,756
|
https://api.github.com/repos/huggingface/datasets/issues/5123
|
https://github.com/huggingface/datasets/issues/5123
| 5,123
|
datasets freezes with streaming mode in multiple-gpu
|
open
| 11
| 2022-10-17T03:28:16
| 2023-05-14T06:55:20
| null |
jackfeinmann5
|
[
"bug"
] |
## Describe the bug
Hi. I am using this dataloader, which processes large datasets in streaming mode, as mentioned in one of the Hugging Face examples. I am using it to read C4: https://github.com/huggingface/transformers/blob/b48ac1a094e572d6076b46a9e4ed3e0ebe978afc/examples/research_projects/codeparrot/scripts/codeparrot_training.py#L22
When using multiple GPUs with Accelerate on one node the code freezes, but it works with 1 GPU:
```
10/16/2022 14:18:46 - INFO - datasets.info - Loading Dataset Infos from /home/jack/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01
Steps: 0%| | 0/400000 [00:00<?, ?it/s]10/16/2022 14:18:47 - INFO - torch.utils.data.dataloader - Shared seed (135290893754684706) sent to store on rank 0
```
# Code to reproduce
please run this code with `accelerate launch code.py`
```
from accelerate import Accelerator
from accelerate.logging import get_logger
from datasets import load_dataset
from torch.utils.data.dataloader import DataLoader
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
import torch
from accelerate.logging import get_logger
from torch.utils.data import IterableDataset
from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe
logger = get_logger(__name__)
class ConstantLengthDataset(IterableDataset):
"""
Iterable dataset that returns constant length chunks of tokens from stream of text files.
Args:
tokenizer (Tokenizer): The processor used for processing the data.
dataset (dataset.Dataset): Dataset with text files.
infinite (bool): If True the iterator is reset after dataset reaches end else stops.
max_seq_length (int): Length of token sequences to return.
num_of_sequences (int): Number of token sequences to keep in buffer.
chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.
"""
def __init__(
self,
tokenizer,
dataset,
infinite=False,
max_seq_length=1024,
num_of_sequences=1024,
chars_per_token=3.6,
):
self.tokenizer = tokenizer
# self.concat_token_id = tokenizer.bos_token_id
self.dataset = dataset
self.max_seq_length = max_seq_length
self.epoch = 0
self.infinite = infinite
self.current_size = 0
self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences
self.content_field = "text"
def __iter__(self):
iterator = iter(self.dataset)
more_examples = True
while more_examples:
buffer, buffer_len = [], 0
while True:
if buffer_len >= self.max_buffer_size:
break
try:
buffer.append(next(iterator)[self.content_field])
buffer_len += len(buffer[-1])
except StopIteration:
if self.infinite:
iterator = iter(self.dataset)
self.epoch += 1
logger.info(f"Dataset epoch: {self.epoch}")
else:
more_examples = False
break
tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"]
all_token_ids = []
for tokenized_input in tokenized_inputs:
all_token_ids.extend(tokenized_input)
for i in range(0, len(all_token_ids), self.max_seq_length):
input_ids = all_token_ids[i : i + self.max_seq_length]
if len(input_ids) == self.max_seq_length:
self.current_size += 1
yield torch.tensor(input_ids)
def shuffle(self, buffer_size=1000):
return ShufflerIterDataPipe(self, buffer_size=buffer_size)
def create_dataloaders(tokenizer, accelerator):
ds_kwargs = {"streaming": True}
# In distributed training, the load_dataset function guarantees that only one process
# can concurrently download the dataset.
datasets = load_dataset(
"c4",
"en",
cache_dir="cache_dir",
**ds_kwargs,
)
train_data, valid_data = datasets["train"], datasets["validation"]
with accelerator.main_process_first():
train_data = train_data.shuffle(buffer_size=10000, seed=None)
train_dataset = ConstantLengthDataset(
tokenizer,
train_data,
infinite=True,
max_seq_length=256,
)
valid_dataset = ConstantLengthDataset(
tokenizer,
valid_data,
infinite=False,
max_seq_length=256,
)
train_dataset = train_dataset.shuffle(buffer_size=10000)
train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True)
eval_dataloader = DataLoader(valid_dataset, batch_size=160)
return train_dataloader, eval_dataloader
def main():
# Accelerator.
logging_dir = "data_save_dir/log"
accelerator = Accelerator(
gradient_accumulation_steps=1,
mixed_precision="bf16",
log_with="tensorboard",
logging_dir=logging_dir,
)
# We need to initialize the trackers we use, and also store our configuration.
# The trackers initializes automatically on the main process.
if accelerator.is_main_process:
accelerator.init_trackers("test")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Load datasets and create dataloaders.
train_dataloader, _ = create_dataloaders(tokenizer, accelerator)
train_dataloader = accelerator.prepare(train_dataloader)
for step, batch in enumerate(train_dataloader, start=1):
print(step)
accelerator.end_training()
if __name__ == "__main__":
main()
```
## Results expected
Being able to run the code for streaming datasets with multi-gpu
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: linux
- Python version: 3.9.12
- PyArrow version: 9.0.0
@lhoestq I do not have any idea why this freezing happens. I removed the streaming mode and everything worked fine, so I know this is caused by the streaming mode of the dataloader not working well with the multi-gpu setting. Since the datasets are large, I hope to keep the streaming mode. I very much appreciate your help.
| false
|
1,410,732,403
|
https://api.github.com/repos/huggingface/datasets/issues/5122
|
https://github.com/huggingface/datasets/pull/5122
| 5,122
|
Add warning
|
closed
| 1
| 2022-10-17T01:30:37
| 2022-11-05T12:23:53
| 2022-11-05T12:23:53
|
Salehbigdeli
|
[] |
Fixes: #5105
I think removing the directory with a warning is a better solution for this issue, because if we decided to keep the existing files in the directory, we would have to deal with the case of providing the same directory for several datasets! Which we know is not possible since `dataset_info.json` exists in that directory.
| true
|
1,410,681,067
|
https://api.github.com/repos/huggingface/datasets/issues/5121
|
https://github.com/huggingface/datasets/pull/5121
| 5,121
|
Bugfix ignore function when creating new_fingerprint for caching
|
closed
| 1
| 2022-10-17T00:03:43
| 2022-10-17T12:39:36
| 2022-10-17T12:39:36
|
Salehbigdeli
|
[] |
maybe fixes: #5109
| true
|
1,410,641,221
|
https://api.github.com/repos/huggingface/datasets/issues/5120
|
https://github.com/huggingface/datasets/pull/5120
| 5,120
|
Fix `tqdm` zip bug
|
closed
| 11
| 2022-10-16T22:19:18
| 2022-10-23T10:27:53
| 2022-10-19T08:53:17
|
david1542
|
[] |
This PR solves #5117 by wrapping the entire `zip` clause in `tqdm`.
For more information, please checkout this Stack Overflow thread:
https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together
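The underlying reason, in a minimal sketch: a `zip` object has no `__len__`, so a progress bar built from it alone has no known total.

```python
a, b = [1, 2, 3], [4, 5, 6]

# zip returns a lazy iterator with no length, so a bar wrapping it
# has no known total and cannot render completion.
assert not hasattr(zip(a, b), "__len__")

# The workaround mirrored from the PR (sketch): wrap the whole zip and
# pass the total explicitly, e.g. tqdm(zip(a, b), total=len(a)).
total = min(len(a), len(b))
print(total)  # 3
```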
| true
|
1,410,561,363
|
https://api.github.com/repos/huggingface/datasets/issues/5119
|
https://github.com/huggingface/datasets/pull/5119
| 5,119
|
[TYPO] Update new_dataset_script.py
|
closed
| 1
| 2022-10-16T17:36:49
| 2022-10-19T09:48:19
| 2022-10-19T09:45:59
|
cakiki
|
[] | null | true
|
1,410,547,373
|
https://api.github.com/repos/huggingface/datasets/issues/5118
|
https://github.com/huggingface/datasets/issues/5118
| 5,118
|
Installing `datasets` on M1 computers
|
closed
| 1
| 2022-10-16T16:50:08
| 2022-10-19T09:10:08
| 2022-10-19T09:10:08
|
david1542
|
[
"bug"
] |
## Describe the bug
I wanted to install the `datasets` dependencies on my M1 machine (in order to start contributing to the project). However, I got an error regarding `tensorflow`.
On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` is installed on M1?
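A sketch of picking the dependency conditionally (hypothetical helper; the actual fix would more likely use environment markers in `setup.py`):

```python
import platform
import sys

def tensorflow_requirement():
    # Hypothetical sketch: on Apple Silicon (macOS + arm64) install
    # tensorflow-macos instead of the regular tensorflow package.
    if sys.platform == "darwin" and platform.machine() == "arm64":
        return "tensorflow-macos>=2.3"
    return "tensorflow>=2.3,!=2.6.0,!=2.6.1"

print(tensorflow_requirement())
```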
## Steps to reproduce the bug
Fresh clone this project (on m1), create a virtualenv and run this:
```python
pip install -e ".[dev]"
```
## Expected results
Installation should be smooth, and all the dependencies should be installed on M1.
## Actual results
You should receive an error, saying pip couldn't find a version that matches this pattern:
```
tensorflow>=2.3,!=2.6.0,!=2.6.1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.2.dev0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
| false
|
1,409,571,346
|
https://api.github.com/repos/huggingface/datasets/issues/5117
|
https://github.com/huggingface/datasets/issues/5117
| 5,117
|
Progress bars have color red and never completed to 100%
|
closed
| 5
| 2022-10-14T16:12:30
| 2024-06-19T19:03:42
| 2022-10-23T12:58:41
|
echatzikyriakidis
|
[
"bug"
] |
## Describe the bug
Progress bars after transformative operations turn red and never complete to 100%
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('rotten_tomatoes', split='test').filter(lambda o: True)
```
## Expected results
Progress bar should be 100% and green
## Actual results
The progress bar turns red and never completes to 100%
## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| false
|
1,409,549,471
|
https://api.github.com/repos/huggingface/datasets/issues/5116
|
https://github.com/huggingface/datasets/pull/5116
| 5,116
|
Use yaml for issue templates + revamp
|
closed
| 1
| 2022-10-14T15:53:13
| 2022-10-19T13:05:49
| 2022-10-19T13:03:22
|
mariosasko
|
[] |
Use YAML instead of markdown (more expressive) for the issue templates. In addition, update their structure/fields to be more aligned with Transformers.
PS: also removes the "add_dataset" PR template, as we no longer accept such PRs.
| true
|
1,409,250,020
|
https://api.github.com/repos/huggingface/datasets/issues/5115
|
https://github.com/huggingface/datasets/pull/5115
| 5,115
|
Fix iter_batches
|
closed
| 3
| 2022-10-14T12:06:14
| 2022-10-14T15:02:15
| 2022-10-14T14:59:58
|
lhoestq
|
[] |
The `pa.Table.to_reader()` method available in `pyarrow>=8.0.0` may return chunks of size < `max_chunksize`, so `iter_batches` can return batches smaller than the `batch_size` specified by the user.
As a result, batched `map` couldn't always use batches of the right size, e.g. this fails because it runs on only one batch of one element:
```python
from datasets import Dataset, concatenate_datasets
ds = concatenate_datasets([Dataset.from_dict({"a": [i]}) for i in range(10)])
ds2 = ds.map(lambda _: {}, batched=True)
assert list(ds2) == list(ds)
```
This was introduced in https://github.com/huggingface/datasets/pull/5030
Close https://github.com/huggingface/datasets/issues/5111
This will require a patch release along with https://github.com/huggingface/datasets/pull/5113
TODO:
- [x] fix tests
- [x] add more tests
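A pure-Python sketch of the fix idea, with toy lists standing in for Arrow record batches: re-accumulate possibly undersized chunks so downstream always sees full batches.

```python
def rebatch(chunks, batch_size):
    # Accumulate possibly undersized chunks so that every yielded batch
    # (except perhaps the last) has exactly batch_size elements.
    buffer = []
    for chunk in chunks:
        buffer.extend(chunk)
        while len(buffer) >= batch_size:
            yield buffer[:batch_size]
            buffer = buffer[batch_size:]
    if buffer:
        yield buffer

# to_reader() may emit 10 chunks of one element each:
batches = list(rebatch([[i] for i in range(10)], 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```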
| true
|
1,409,236,738
|
https://api.github.com/repos/huggingface/datasets/issues/5114
|
https://github.com/huggingface/datasets/issues/5114
| 5,114
|
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
|
open
| 2
| 2022-10-14T11:54:53
| 2022-11-19T07:13:10
| null |
bruno-hays
|
[
"bug"
] |
## Describe the bug
The function `load_from_disk` fails when using a remote filesystem because of a wrong temporary path generated in the `load_from_disk` method of `arrow_dataset.py`:
```python
if is_remote_filesystem(fs):
src_dataset_path = extract_path_from_uri(dataset_path)
dataset_path = Dataset._build_local_temp_path(src_dataset_path)
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
```
If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train`.
Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (notice `train` appears twice).
Instead of downloading the remote folder, we should download all the files in the folder for the path to be right:
```python
fs.download(os.path.join(src_dataset_path, "*"), dataset_path.as_posix(), recursive=True)
```
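A toy illustration of the doubled path component (hypothetical, shortened paths):

```python
import posixpath

# Recursively downloading the *folder* src into a target that already
# ends with the folder's name nests it one level deeper.
src_dataset_path = "speech/mydataset/train"
local_tmp_path = "/var/folders/T/tmpdir/speech/mydataset/train"

nested = posixpath.join(local_tmp_path, posixpath.basename(src_dataset_path))
print(nested)  # /var/folders/T/tmpdir/speech/mydataset/train/train
```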
## Steps to reproduce the bug
```python
fs = gcsfs.GCSFileSystem(**storage_options)
dataset = load_from_disk("common_voice_processed") # loading local dataset previously saved locally, works fine
dataset.save_to_disk(output_dir, fs=fs) #works fine
dataset = load_from_disk(output_dir, fs=fs) # crashes
```
## Expected results
The dataset is loaded
## Actual results
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-2.6.1.dev0
- Platform: mac os monterey 12.5.1
- Python version: 3.8.13
- PyArrow version:pyarrow==9.0.0
| false
|
1,409,207,607
|
https://api.github.com/repos/huggingface/datasets/issues/5113
|
https://github.com/huggingface/datasets/pull/5113
| 5,113
|
Fix filter indices when batched
|
closed
| 3
| 2022-10-14T11:30:03
| 2022-10-24T06:21:09
| 2022-10-14T12:11:44
|
albertvillanova
|
[] |
This PR fixes a bug introduced by:
- #5030
Fix #5112.
| true
|
1,409,143,409
|
https://api.github.com/repos/huggingface/datasets/issues/5112
|
https://github.com/huggingface/datasets/issues/5112
| 5,112
|
Bug with filtered indices
|
closed
| 3
| 2022-10-14T10:35:47
| 2022-10-14T13:55:03
| 2022-10-14T12:11:45
|
albertvillanova
|
[
"bug"
] |
## Describe the bug
As reported by @PartiallyTyped (and by @Muennighoff):
- https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524
There is an issue with the indices of a filtered dataset.
## Steps to reproduce the bug
```python
ds = Dataset.from_dict({"num": [0, 1, 2, 3]})
ds = ds.filter(lambda num: num % 2 == 0, input_columns="num", batch_size=2)
assert all(item["num"] % 2 == 0 for item in ds)
```
## Expected results
The filtered dataset should contain only the examples that satisfy the predicate (here, the items whose "num" value is even).
## Actual results
Indices of items that do not satisfy the predicate are included in the filtered dataset indices.
## Preliminary investigation
It seems to be a bug introduced by:
- #5030
| false
|
1,408,143,170
|
https://api.github.com/repos/huggingface/datasets/issues/5111
|
https://github.com/huggingface/datasets/issues/5111
| 5,111
|
map and filter not working properly in multiprocessing with the new release 2.6.0
|
closed
| 14
| 2022-10-13T17:00:55
| 2022-10-17T08:26:59
| 2022-10-14T14:59:59
|
loubnabnl
|
[
"bug"
] |
## Describe the bug
When `map` is used on a dataset with more than one process, there is weird behavior when trying to use `filter`: it is as if only the samples from one worker are retrieved, and one needs to specify the same `num_proc` in `filter` for it to work properly. This doesn't happen with `datasets` version 2.5.2.
In the code below the data is filtered differently when we increase the `num_proc` used in `map`, although the datasets before and after mapping have identical elements.
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
def preprocess(example):
return example
ds = load_dataset("codeparrot/codeparrot-clean-valid", split="train").select([i for i in range(10)])
ds1 = ds.map(preprocess, num_proc=2)
ds2 = ds.map(preprocess)
# the datasets elements are the same
for i in range(len(ds1)):
assert ds1[i]==ds2[i]
print(f'Target column before filtering {ds1["autogenerated"]}')
print(f'Target column before filtering {ds2["autogenerated"]}')
print(f"datasets version {datasets.__version__}")
ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"])
ds_filtered_2 = ds2.filter(lambda x: not x["autogenerated"])
# all elements in Target column are false so they should all be kept, but for ds2 only the first 5=num_samples/num_proc are kept
print(ds_filtered_1)
print(ds_filtered_2)
```
```
Target column before filtering [False, False, False, False, False, False, False, False, False, False]
Target column before filtering [False, False, False, False, False, False, False, False, False, False]
Dataset({
features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
num_rows: 5
})
Dataset({
features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
num_rows: 10
})
```
## Expected results
Increasing `num_proc` in mapping shouldn't alter filtering. With the previous version 2.5.2 this doesn't happen
## Actual results
Filtering doesn't work properly when we increase `num_proc` in mapping but not when calling `filter`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| false
|
1,407,434,706
|
https://api.github.com/repos/huggingface/datasets/issues/5109
|
https://github.com/huggingface/datasets/issues/5109
| 5,109
|
Map caching not working for some class methods
|
closed
| 2
| 2022-10-13T09:12:58
| 2022-10-17T10:38:45
| 2022-10-17T10:38:45
|
Mouhanedg56
|
[
"bug"
] |
## Describe the bug
The cache loading is not working as expected for some class methods with a model stored in an attribute.
The new fingerprint for `_map_single` is not the same at each run: the hasher generates a different hash for the class method.
This comes from the `dumps` function in `datasets.utils.py_utils`, which generates a different dump at each run.
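For context, a minimal sketch of the fingerprinting idea (not the actual `datasets` Hasher): hash a serialized dump of the transform, so a stable dump means a stable cache key.

```python
import hashlib
import pickle

def fingerprint(obj):
    # Sketch only: hash a serialized dump of the transform's inputs.
    # If the dump changes between runs (as reported here for objects
    # holding a model attribute), the cache lookup misses every time.
    return hashlib.md5(pickle.dumps(obj)).hexdigest()

print(fingerprint(("tokenize", 256)) == fingerprint(("tokenize", 256)))  # True
```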
## Steps to reproduce the bug
```python
from datasets import load_dataset
from transformers import AutoConfig, AutoModel, AutoTokenizer
dataset = load_dataset("ethos", "binary")
BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2"
class Object:
def __init__(self):
config = AutoConfig.from_pretrained(BASE_MODELNAME)
self.bert = AutoModel.from_config(config=config, add_pooling_layer=False)
self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME)
def tokenize(self, examples):
tokenized_texts = self.tok(
examples["text"],
padding="max_length",
truncation=True,
max_length=256,
)
return tokenized_texts
instance = Object()
result = dict()
for phase in ["train"]:
result[phase] = dataset[phase].map(instance.tokenize, batched=True, load_from_cache_file=True, num_proc=2)
```
## Expected results
Load cache instead of recompute result.
## Actual results
Result recomputed from scratch at each run.
The cache works fine when deleting `bert` attribute.
## Environment info
- `datasets` version: 2.5.3.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.13
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
| false
|
1,407,044,107
|
https://api.github.com/repos/huggingface/datasets/issues/5108
|
https://github.com/huggingface/datasets/pull/5108
| 5,108
|
Fix a typo in arrow_dataset.py
|
closed
| 0
| 2022-10-13T02:33:55
| 2022-10-14T09:47:28
| 2022-10-14T09:47:27
|
yangky11
|
[] | null | true
|
1,406,736,710
|
https://api.github.com/repos/huggingface/datasets/issues/5107
|
https://github.com/huggingface/datasets/pull/5107
| 5,107
|
Multiprocessed dataset builder
|
closed
| 17
| 2022-10-12T19:59:17
| 2022-12-01T15:37:09
| 2022-11-09T17:11:43
|
TevenLeScao
|
[] |
This PR adds the multiprocessing part of #2650 (but not the caching of already-computed arrow files). On the other hand, loading of sharded arrow files still needs to be implemented (sharded parquet files can already be loaded).
| true
|
1,406,635,758
|
https://api.github.com/repos/huggingface/datasets/issues/5106
|
https://github.com/huggingface/datasets/pull/5106
| 5,106
|
Fix task template reload from dict
|
closed
| 2
| 2022-10-12T18:33:49
| 2022-10-13T09:59:07
| 2022-10-13T09:56:51
|
lhoestq
|
[] |
Since #4926 the JSON dumps are simplified, which made task template dicts empty by default.
I fixed this by always including the task name, which is needed to reload a task from a dict.
| true
|
1,406,078,357
|
https://api.github.com/repos/huggingface/datasets/issues/5105
|
https://github.com/huggingface/datasets/issues/5105
| 5,105
|
Specifying an existing folder in download_and_prepare deletes everything in it
|
open
| 5
| 2022-10-12T11:53:33
| 2022-10-20T11:53:59
| null |
cakiki
|
[
"bug"
] |
## Describe the bug
The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder exists, everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current dir and also leads to **another bug**, whose traceback is the following:
```
Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 rotten_tomatoes_builder.download_and_prepare(output_dir=".", max_shard_size="200MB", file_format="parquet")
File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:818, in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
File /usr/lib/python3.9/contextlib.py:124, in _GeneratorContextManager.__exit__(self, type, value, traceback)
122 if type is None:
123 try:
--> 124 next(self.gen)
125 except StopIteration:
126 return False
File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:760, in incomplete_dir(dirname)
File /usr/lib/python3.9/shutil.py:722, in rmtree(path, ignore_errors, onerror)
720 os.rmdir(path)
721 except OSError:
--> 722 onerror(os.rmdir, path, sys.exc_info())
723 else:
724 try:
725 # symlinks to directories are forbidden, see bug #1669
File /usr/lib/python3.9/shutil.py:720, in rmtree(path, ignore_errors, onerror)
718 _rmtree_safe_fd(fd, path, onerror)
719 try:
--> 720 os.rmdir(path)
721 except OSError:
722 onerror(os.rmdir, path, sys.exc_info())
OSError: [Errno 22] Invalid argument: '/home/christopher/BIGSCIENCE/.'
```
## Steps to reproduce the bug
```python
rotten_tomatoes_builder = load_dataset_builder("rotten_tomatoes")
rotten_tomatoes_builder.download_and_prepare(output_dir="./test_folder", max_shard_size="200MB", file_format="parquet")
```
If `test_folder` contains any files they will all be deleted
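Two separable things are going on here. The final `OSError` is reproducible on its own: POSIX forbids `rmdir()` on a path whose last component is `.`, so `shutil.rmtree(".")` can never finish. And until the builder guards against non-empty directories, a defensive check before calling `download_and_prepare` avoids the data loss (the helper below is hypothetical, not part of `datasets`):

```python
import errno
import os


# 1) Why rmtree(".") dies with "Invalid argument": rmdir() rejects
#    a path whose final component is ".".
try:
    os.rmdir(".")
except OSError as e:
    print(e.errno == errno.EINVAL)  # True on Linux (errno 22, as in the traceback)


# 2) Hypothetical guard to call before download_and_prepare():
def check_output_dir(path):
    # Refuse non-empty directories rather than risk their contents
    # being removed by the incomplete-dir cleanup.
    if os.path.isdir(path) and os.listdir(path):
        raise ValueError(f"refusing to use non-empty output_dir: {path!r}")
    return path
```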
## Expected results
Either a warning that all files will be deleted, but preferably that they not be deleted at all.
## Actual results
N/A
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| false
|
1,405,973,102
|
https://api.github.com/repos/huggingface/datasets/issues/5104
|
https://github.com/huggingface/datasets/pull/5104
| 5,104
|
Fix loading how to guide (#5102)
|
closed
| 1
| 2022-10-12T10:34:42
| 2022-10-12T11:34:07
| 2022-10-12T11:31:55
|
riccardobucco
|
[] | null | true
|
1,405,956,311
|
https://api.github.com/repos/huggingface/datasets/issues/5103
|
https://github.com/huggingface/datasets/pull/5103
| 5,103
|
url encode hub url (#5099)
|
closed
| 1
| 2022-10-12T10:22:12
| 2022-10-12T15:27:24
| 2022-10-12T15:24:47
|
riccardobucco
|
[] | null | true
|
1,404,746,554
|
https://api.github.com/repos/huggingface/datasets/issues/5102
|
https://github.com/huggingface/datasets/issues/5102
| 5,102
|
Error in create a dataset from a Python generator
|
closed
| 2
| 2022-10-11T14:28:58
| 2022-10-12T11:31:56
| 2022-10-12T11:31:56
|
yangxuhui
|
[
"bug",
"good first issue",
"hacktoberfest"
] |
## Describe the bug
In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in.
```Python
>>> from datasets import Dataset
>>> def my_gen():
... for i in range(1, 4):
... yield {"a": i}
>>> dataset = Dataset.from_generator(my_dict)
```
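The corrected call presumably passes the generator function that was just defined. A sketch (the `Dataset.from_generator` line is shown commented so this runs without `datasets` installed):

```python
def my_gen():
    for i in range(1, 4):
        yield {"a": i}


# The guide should pass the generator *function* it just defined:
# dataset = Dataset.from_generator(my_gen)   # not the undefined my_dict
print(list(my_gen()))  # -> [{'a': 1}, {'a': 2}, {'a': 3}]
```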
| false
|
1,404,513,085
|
https://api.github.com/repos/huggingface/datasets/issues/5101
|
https://github.com/huggingface/datasets/pull/5101
| 5,101
|
Free the "hf" filesystem protocol for `hffs`
|
closed
| 1
| 2022-10-11T11:57:21
| 2022-10-12T15:32:59
| 2022-10-12T15:30:38
|
lhoestq
|
[] | null | true
|
1,404,458,586
|
https://api.github.com/repos/huggingface/datasets/issues/5100
|
https://github.com/huggingface/datasets/issues/5100
| 5,100
|
datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method
|
closed
| 0
| 2022-10-11T11:16:31
| 2022-10-11T13:48:26
| 2022-10-11T13:48:26
|
jagochi
|
[] | null | false
|
1,404,370,191
|
https://api.github.com/repos/huggingface/datasets/issues/5099
|
https://github.com/huggingface/datasets/issues/5099
| 5,099
|
datasets doesn't support # in data paths
|
closed
| 9
| 2022-10-11T10:05:32
| 2022-10-13T13:14:20
| 2022-10-13T13:14:20
|
loubnabnl
|
[
"bug",
"good first issue",
"hacktoberfest"
] |
## Describe the bug
dataset files with `#` symbol their paths aren't read correctly.
## Steps to reproduce the bug
The data in folder `c#`of this [dataset](https://huggingface.co/datasets/loubnabnl/bigcode_csharp) can't be loaded. While the folder `c_sharp` with the same data is loaded properly
```python
ds = load_dataset('loubnabnl/bigcode_csharp', split="train", data_files=["data/c#/*"])
```
```
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/27a3166cff4bb18e11919cafa6f169c0f57483de/data/c#/data_0003.jsonl
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
cc @lhoestq
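The failure is consistent with the `#` never being percent-encoded when the file path is turned into a URL: an unescaped `#` starts the URL fragment, so the server only sees the path up to `c`. A quick stdlib check (URL shortened to the `main` revision for illustration):

```python
from urllib.parse import quote, urlsplit

url = "https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/main/data/c#/data_0003.jsonl"

# Everything after the raw "#" is parsed as a fragment, not as the path:
print(urlsplit(url).path)  # -> "/datasets/loubnabnl/bigcode_csharp/resolve/main/data/c"

# Percent-encoding the repo-relative path keeps "#" inside the path:
print(quote("data/c#/data_0003.jsonl"))  # -> "data/c%23/data_0003.jsonl"
```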
| false
|
1,404,058,518
|
https://api.github.com/repos/huggingface/datasets/issues/5098
|
https://github.com/huggingface/datasets/issues/5098
| 5,098
|
Classes label error when loading symbolic links using imagefolder
|
closed
| 3
| 2022-10-11T06:10:58
| 2022-11-14T14:40:20
| 2022-11-14T14:40:20
|
horizon86
|
[
"enhancement",
"good first issue",
"hacktoberfest"
] |
**Is your feature request related to a problem? Please describe.**
Like this: #4015
When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** will be used as the class name instead of the parent folder of the symbolic link itself. Can you give an option to decide whether to enable symbolic link tracking?
This is inconsistent with the `torchvision.datasets.ImageFolder` behavior.
For example:


It uses `others` (in the green circle) as the class label instead of `abnormal`; I wish `load_dataset` would not use the real file's parent folder as the label.
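The difference comes down to which parent directory supplies the label: the symlink's own parent (the `torchvision` behavior) versus the parent of the resolved target (the behavior reported here). A minimal sketch with plain `os.path` calls:

```python
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "others"))
os.makedirs(os.path.join(root, "abnormal"))

real = os.path.join(root, "others", "img.png")
open(real, "w").close()
link = os.path.join(root, "abnormal", "img.png")
os.symlink(real, link)

# Label from the symlink's own parent (torchvision ImageFolder behavior):
print(os.path.basename(os.path.dirname(link)))  # -> "abnormal"

# Label after resolving the link (the behavior this issue reports):
print(os.path.basename(os.path.dirname(os.path.realpath(link))))  # -> "others"
```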
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context about the feature request here.
| false
|
1,403,679,353
|
https://api.github.com/repos/huggingface/datasets/issues/5097
|
https://github.com/huggingface/datasets/issues/5097
| 5,097
|
Fatal error with pyarrow/libarrow.so
|
closed
| 1
| 2022-10-10T20:29:04
| 2022-10-11T06:56:01
| 2022-10-11T06:56:00
|
catalys1
|
[
"bug"
] |
## Describe the bug
When using datasets, at the very end of my jobs the program crashes (see trace below).
It doesn't seem to affect anything, as it appears to happen as the program is closing down. Just importing `datasets` is enough to cause the error.
## Steps to reproduce the bug
This is sufficient to reproduce the problem:
```bash
python -c "import datasets"
```
## Expected results
Program should run to completion without an error.
## Actual results
```bash
Fatal error condition occurred in /opt/vcpkg/buildtrees/aws-c-io/src/9e6648842a-364b708815.clean/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS
Exiting Application
################################################################################
Stack trace:
################################################################################
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200af06) [0x150dff547f06]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x20028e5) [0x150dff53f8e5]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f27e09) [0x150dff464e09]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f25948) [0x150dff462948]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1ee0b46) [0x150dff41db46]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x194546a) [0x150dfee8246a]
/lib64/libc.so.6(+0x39b0c) [0x150e15eadb0c]
/lib64/libc.so.6(on_exit+0) [0x150e15eadc40]
/u/user/miniconda3/envs/env/bin/python(+0x28db18) [0x560ae370eb18]
/u/user/miniconda3/envs/env/bin/python(+0x28db4b) [0x560ae370eb4b]
/u/user/miniconda3/envs/env/bin/python(+0x28db90) [0x560ae370eb90]
/u/user/miniconda3/envs/env/bin/python(_PyRun_SimpleFileObject+0x1e6) [0x560ae37123e6]
/u/user/miniconda3/envs/env/bin/python(_PyRun_AnyFileObject+0x44) [0x560ae37124c4]
/u/user/miniconda3/envs/env/bin/python(Py_RunMain+0x35d) [0x560ae37135bd]
/u/user/miniconda3/envs/env/bin/python(Py_BytesMain+0x39) [0x560ae37137d9]
/lib64/libc.so.6(__libc_start_main+0xf3) [0x150e15e97493]
/u/user/miniconda3/envs/env/bin/python(+0x2125d4) [0x560ae36935d4]
Aborted (core dumped)
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| false
|
1,403,379,816
|
https://api.github.com/repos/huggingface/datasets/issues/5096
|
https://github.com/huggingface/datasets/issues/5096
| 5,096
|
Transfer some canonical datasets under an organization namespace
|
closed
| 11
| 2022-10-10T15:44:31
| 2024-06-24T06:06:28
| 2024-06-24T06:02:45
|
albertvillanova
|
[
"dataset contribution"
] |
As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if one does not already exist).
Conversely, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and will eventually delete it).
First, we should test it using a dummy dataset/organization.
TODO:
- [x] Test with a dummy dataset
- [x] Create dummy canonical dataset: https://huggingface.co/datasets/dummy_canonical_dataset
- [x] Create dummy organization: https://huggingface.co/dummy-canonical-org
- [x] Transfer dummy canonical dataset to dummy organization
- [ ] Transfer datasets
- [x] babi_qa => facebook
- [x] blbooks => TheBritishLibrary/blbooks
- [x] blbooksgenre => TheBritishLibrary/blbooksgenre
- [x] common_gen => allenai
- [x] commonsense_qa => tau
- [x] competition_math => hendrycks/competition_math
- [x] cord19 => allenai
- [x] emotion => dair-ai
- [ ] gem => GEM
- [x] hellaswag => Rowan
- [x] hendrycks_test => cais/mmlu
- [x] indonlu => indonlp
- [ ] multilingual_librispeech => facebook
  - "facebook/multilingual_librispeech" already exists
- [ ] oscar => oscar-corpus
- [x] peer_read => allenai
- [x] qasper => allenai
- [x] reddit => webis/tldr-17
- [x] russian_super_glue => russiannlp
- [x] rvl_cdip => aharley
- [x] s2orc => allenai
- [x] scicite => allenai
- [x] scifact => allenai
- [x] scitldr => allenai
- [x] swiss_judgment_prediction => rcds
- [x] the_pile => EleutherAI
- [ ] wmt14, wmt15, wmt16, wmt17, wmt18, wmt19,... => wmt
- [ ] Deprecate (and eventually remove) datasets that cannot be transferred because they already exist
- [x] banking77 => PolyAI
- [x] common_voice => mozilla-foundation
- [x] german_legal_entity_recognition => elenanereiss
- ...
EDIT: the list above is continuously being updated
| false
|
1,403,221,408
|
https://api.github.com/repos/huggingface/datasets/issues/5095
|
https://github.com/huggingface/datasets/pull/5095
| 5,095
|
Fix tutorial (#5093)
|
closed
| 2
| 2022-10-10T13:55:15
| 2022-10-10T17:50:52
| 2022-10-10T15:32:20
|
riccardobucco
|
[] |
Close #5093
| true
|
1,403,214,950
|
https://api.github.com/repos/huggingface/datasets/issues/5094
|
https://github.com/huggingface/datasets/issues/5094
| 5,094
|
Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock
|
closed
| 11
| 2022-10-10T13:50:56
| 2023-07-24T15:29:13
| 2023-07-24T15:29:13
|
RR-28023
|
[
"bug"
] |
## Describe the bug
There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [at this step](https://github.com/huggingface/datasets/blob/1b935dab9d2f171a8c6294269421fe967eb55e34/src/datasets/arrow_dataset.py#L2663) go into wait mode forever.
## Steps to reproduce the bug
The below code goes into deadlock when `NUMBER_OF_PROCESSES` is greater than one.
```python
NUMBER_OF_PROCESSES = 2
from transformers import AutoTokenizer, AutoModel
from datasets import load_dataset
dataset = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model.to("cpu")
def cls_pooling(model_output):
return model_output.last_hidden_state[:, 0]
def generate_embeddings_batched(examples):
sentences_batch = list(examples['sentence1'])
encoded_input = tokenizer(
sentences_batch, padding=True, truncation=True, return_tensors="pt"
)
encoded_input = {k: v.to("cpu") for k, v in encoded_input.items()}
model_output = model(**encoded_input)
embeddings = cls_pooling(model_output)
examples['embeddings'] = embeddings.detach().cpu().numpy() # 64, 384
return examples
embeddings_dataset = dataset.map(
generate_embeddings_batched,
batched=True,
batch_size=10,
num_proc=NUMBER_OF_PROCESSES
)
```
While debugging it I've seen that it gets "stuck" when calling `torch.nn.Embedding.forward` but some testing shows that the same happens with other functions from `torch.nn`.
## Environment info
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.31
- Python version: 3.9.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
Not sure if this is a HF problem, a PyTorch problem or something I'm doing wrong..
Thanks!
| false
|
1,402,939,660
|
https://api.github.com/repos/huggingface/datasets/issues/5093
|
https://github.com/huggingface/datasets/issues/5093
| 5,093
|
Mismatch between tutorial and doc
|
closed
| 3
| 2022-10-10T10:23:53
| 2022-10-10T17:51:15
| 2022-10-10T17:51:14
|
clefourrier
|
[
"bug",
"good first issue",
"hacktoberfest"
] |
## Describe the bug
In the "Process text data" tutorial, [`map` has `return_tensors` as a kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor does it seem to work.
## Steps to reproduce the bug
MWE:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt")
```
## Expected results
return_tensors to be a valid kwarg :smiley:
## Actual results
```python
>> TypeError: map() got an unexpected keyword argument 'return_tensors'
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| false
|
1,402,713,517
|
https://api.github.com/repos/huggingface/datasets/issues/5092
|
https://github.com/huggingface/datasets/pull/5092
| 5,092
|
Use HTML relative paths for tiles in the docs
|
closed
| 3
| 2022-10-10T07:24:27
| 2022-10-11T13:25:45
| 2022-10-11T13:23:23
|
lewtun
|
[] |
This PR replaces the absolute paths in the landing page tiles with relative ones so that one can test navigation both locally and in future PRs (see [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5084/en/index) for an example PR where the links don't work).
I encountered this while working on the `optimum` docs and figured I'd fix it elsewhere too :)
Internal Slack thread: https://huggingface.slack.com/archives/C02GLJ5S0E9/p1665129710176619
| true
|
1,401,112,552
|
https://api.github.com/repos/huggingface/datasets/issues/5091
|
https://github.com/huggingface/datasets/pull/5091
| 5,091
|
Allow connection objects in `from_sql` + small doc improvement
|
closed
| 1
| 2022-10-07T12:39:44
| 2022-10-09T13:19:15
| 2022-10-09T13:16:57
|
mariosasko
|
[] |
Allow connection objects in `from_sql` (emit a warning that they are cachable) and add a tip that explains the format of the con parameter when provided as a URI string.
PS: ~~This PR contains a parameter link, so https://github.com/huggingface/doc-builder/pull/311 needs to be merged before it's "ready for review".~~ Done!
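To illustrate the two accepted forms of `con`: a SQLAlchemy-style URI string such as `"sqlite:///my.db"`, or a live connection object as below (stdlib sketch only; `Dataset.from_sql` itself is not called here):

```python
import sqlite3

# A connection object of the kind from_sql now accepts
# (the table name and columns below are illustrative).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE reviews (text TEXT, label INTEGER)")
con.executemany("INSERT INTO reviews VALUES (?, ?)", [("good", 1), ("bad", 0)])

rows = list(con.execute("SELECT text, label FROM reviews"))
print(rows)  # -> [('good', 1), ('bad', 0)]
```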
| true
|
1,401,102,407
|
https://api.github.com/repos/huggingface/datasets/issues/5090
|
https://github.com/huggingface/datasets/issues/5090
| 5,090
|
Review sync issues from GitHub to Hub
|
closed
| 1
| 2022-10-07T12:31:56
| 2022-10-08T07:07:36
| 2022-10-08T07:07:36
|
albertvillanova
|
[
"bug"
] |
## Describe the bug
We have discovered that sometimes there were sync issues between GitHub and Hub datasets, after a merge commit to main branch.
For example:
- this merge commit: https://github.com/huggingface/datasets/commit/d74a9e8e4bfff1fed03a4cab99180a841d7caf4b
- was not properly synced with the Hub: https://github.com/huggingface/datasets/actions/runs/3002495269/jobs/4819769684
```
[main 9e641de] Add Papers with Code ID to scifact dataset (#4941)
Author: Albert Villanova del Moral <albertvillanova@users.noreply.huggingface.co>
1 file changed, 42 insertions(+), 14 deletions(-)
push failed !
GitCommandError(['git', 'push'], 1, b'remote: ---------------------------------------------------------- \nremote: Sorry, your push was rejected during YAML metadata verification: \nremote: - Error: "license" does not match any of the allowed types \nremote: ---------------------------------------------------------- \nremote: Please find the documentation at: \nremote: https://huggingface.co/docs/hub/models-cards#model-card-metadata \nremote: ---------------------------------------------------------- \nTo [https://huggingface.co/datasets/scifact.git\n](https://huggingface.co/datasets/scifact.git/n) ! [remote rejected] main -> main (pre-receive hook declined)\nerror: failed to push some refs to \'[https://huggingface.co/datasets/scifact.git\](https://huggingface.co/datasets/scifact.git/)'', b'')
```
We are reviewing sync issues in previous commits to recover them and repushing to the Hub.
TODO: Review
- [x] #4941
- scifact
- [x] #4931
- scifact
- [x] #4753
- wikipedia
- [x] #4554
- wmt17, wmt19, wmt_t2t
- Fixed with "Release 2.4.0" commit: https://github.com/huggingface/datasets/commit/401d4c4f9b9594cb6527c599c0e7a72ce1a0ea49
- https://huggingface.co/datasets/wmt17/commit/5c0afa83fbbd3508ff7627c07f1b27756d1379ea
- https://huggingface.co/datasets/wmt19/commit/b8ad5bf1960208a376a0ab20bc8eac9638f7b400
- https://huggingface.co/datasets/wmt_t2t/commit/b6d67191804dd0933476fede36754a436b48d1fc
- [x] #4607
- [x] #4416
- lccc
- Fixed with "Release 2.3.0" commit: https://huggingface.co/datasets/lccc/commit/8b1f8cf425b5653a0a4357a53205aac82ce038d1
- [x] #4367
| false
|
1,400,788,486
|
https://api.github.com/repos/huggingface/datasets/issues/5089
|
https://github.com/huggingface/datasets/issues/5089
| 5,089
|
Resume failed process
|
open
| 0
| 2022-10-07T08:07:03
| 2022-10-07T08:07:03
| null |
felix-schneider
|
[
"enhancement"
] |
**Is your feature request related to a problem? Please describe.**
When a process (`map`, `filter`, etc.) crashes part-way through, you lose all progress.
**Describe the solution you'd like**
It would be good if the cache reflected the partial progress, so that after we restart the script, the process can restart where it left off.
**Describe alternatives you've considered**
Doing processing outside of `datasets`, by writing the dataset to json files and building a restart mechanism myself.
**Additional context**
N/A
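The alternative described above (external JSON files plus a restart mechanism) can be sketched like this: process in fixed-size shards, write each shard atomically, and skip shards that already exist on restart (all names illustrative):

```python
import json
import os


def process_in_shards(records, transform, out_dir, shard_size=1000):
    os.makedirs(out_dir, exist_ok=True)
    for start in range(0, len(records), shard_size):
        out_path = os.path.join(out_dir, f"shard-{start // shard_size:05d}.jsonl")
        if os.path.exists(out_path):
            continue  # finished in a previous run -- resume skips it
        tmp_path = out_path + ".tmp"
        with open(tmp_path, "w", encoding="utf-8") as f:
            for record in records[start:start + shard_size]:
                f.write(json.dumps(transform(record)) + "\n")
        os.replace(tmp_path, out_path)  # atomic rename marks the shard complete
```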
| false
|
1,400,530,412
|
https://api.github.com/repos/huggingface/datasets/issues/5088
|
https://github.com/huggingface/datasets/issues/5088
| 5,088
|
load_dataset("json", ...) doesn't read local .json.gz properly
|
open
| 2
| 2022-10-07T02:16:58
| 2022-10-07T14:43:16
| null |
junwang-wish
|
[
"bug"
] |
## Describe the bug
I have a local file `*.json.gz` that can be read by `pandas.read_json(lines=True)`, but cannot be read by `load_dataset("json")` (resulting in 0 lines)
## Steps to reproduce the bug
```python
fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'
ds_panda = DatasetDict(
test=Dataset.from_pandas(
pd.read_json(fpath, lines=True)
)
)
ds_direct = load_dataset(
'json', data_files={
'test': fpath
}, features=Features(
text_input=Value(dtype="string", id=None),
text_output=Value(dtype="string", id=None)
)
)
len(ds_panda['test']), len(ds_direct['test'])
```
## Expected results
Lines of `ds_panda['test']` and `ds_direct['test']` should match.
## Actual results
```
Using custom data configuration default-c0ef2598760968aa
Downloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...
Dataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.
(62087, 0)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.8.13
- PyArrow version: 9.0.0
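A useful first check is whether the file itself is valid gzip-compressed JSON Lines, independent of both libraries. The round-trip below builds such a file with the stdlib; if the real file passes the same read-back loop, the discrepancy is on the `datasets` side (the path and field names are illustrative):

```python
import gzip
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.json.gz")

# Write a small gzip-compressed JSON-Lines file.
with gzip.open(path, "wt", encoding="utf-8") as f:
    for i in range(3):
        f.write(json.dumps({"text_input": f"in{i}", "text_output": f"out{i}"}) + "\n")

# Read it back line by line, the way pandas.read_json(lines=True) would.
with gzip.open(path, "rt", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

print(len(records))  # -> 3
```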
| false
|
1,400,487,967
|
https://api.github.com/repos/huggingface/datasets/issues/5087
|
https://github.com/huggingface/datasets/pull/5087
| 5,087
|
Fix filter with empty indices
|
closed
| 1
| 2022-10-07T01:07:00
| 2022-10-07T18:43:03
| 2022-10-07T18:40:26
|
Mouhanedg56
|
[] |
Fix #5085
| true
|
1,400,216,975
|
https://api.github.com/repos/huggingface/datasets/issues/5086
|
https://github.com/huggingface/datasets/issues/5086
| 5,086
|
HTTPError: 404 Client Error: Not Found for url
|
closed
| 3
| 2022-10-06T19:48:58
| 2022-10-07T15:12:01
| 2022-10-07T15:12:01
|
keyuchen21
|
[
"bug"
] |
## Describe the bug
I was following chapter 5 of the Hugging Face course: https://huggingface.co/course/chapter5/6?fw=tf
However, I'm not able to download the datasets, getting a 404 error
<img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-4686-8631-13d879a0edee.png">
## Steps to reproduce the bug
```python
from huggingface_hub import hf_hub_url
data_files = hf_hub_url(
repo_id="lewtun/github-issues",
filename="datasets-issues-with-hf-doc-builder.jsonl",
repo_type="dataset",
)
from datasets import load_dataset
issues_dataset = load_dataset("json", data_files=data_files, split="train")
issues_dataset
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| false
|
1,400,113,569
|
https://api.github.com/repos/huggingface/datasets/issues/5085
|
https://github.com/huggingface/datasets/issues/5085
| 5,085
|
Filtering on an empty dataset returns a corrupted dataset.
|
closed
| 3
| 2022-10-06T18:18:49
| 2022-10-07T19:06:02
| 2022-10-07T18:40:26
|
gabegma
|
[
"bug",
"hacktoberfest"
] |
## Describe the bug
When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted.
## Steps to reproduce the bug
```python
datasets = load_dataset("glue", "sst2")
dataset_split = datasets['validation']
ds_filter_1 = dataset_split.filter(lambda x: False) # Some filtering condition that leads to an empty dataset
assert ds_filter_1.num_rows == 0
sentences = ds_filter_1['sentence']
assert len(sentences) == 0
ds_filter_2 = ds_filter_1.filter(lambda x: False) # Some other filtering condition
assert ds_filter_2.num_rows == 0
assert 'sentence' in ds_filter_2.column_names
sentences = ds_filter_2['sentence']
```
## Expected results
The last line should return an empty list, just as the equivalent line four lines above does.
## Actual results
The last line currently raises `IndexError: index out of bounds`.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-11.6.6-x86_64-i386-64bit
- Python version: 3.9.11
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| false
|
1,400,016,229
|
https://api.github.com/repos/huggingface/datasets/issues/5084
|
https://github.com/huggingface/datasets/pull/5084
| 5,084
|
IterableDataset formatting in numpy/torch/tf/jax
|
closed
| 3
| 2022-10-06T16:53:38
| 2023-09-24T10:06:51
| 2022-12-20T17:19:52
|
lhoestq
|
[] |
This code now returns a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
It also works with "arrow", "pandas", "torch", "tf" and "jax"
### Implementation details:
I'm using the existing code to format an Arrow Table to the right output format for simplicity.
Therefore it's probably not the most optimized approach.
For example to output PyTorch tensors it does this for every example:
python data -> arrow table -> numpy extracted data -> pytorch formatted data
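Stripped of the Arrow machinery, the per-example formatting amounts to wrapping the raw example iterator and converting values lazily; a toy sketch (names illustrative, not the actual implementation):

```python
def with_format(examples, convert):
    # Apply `convert` to every column value, one example at a time,
    # so nothing is materialized until the stream is consumed.
    for example in examples:
        yield {key: convert(value) for key, value in example.items()}


raw = iter([{"a": [1, 2]}, {"a": [3, 4]}])
formatted = with_format(raw, tuple)
print(next(formatted)["a"])  # -> (1, 2)
```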
### Releasing this feature
Even though I consider this as a bug/inconsistency, this change is a breaking change.
And I'm sure some users were relying on the torch iterable dataset to return PIL Images and used data collators to convert them to PyTorch tensors.
So I guess this is `datasets` 3.0 ?
### TODO
- [x] merge https://github.com/huggingface/datasets/pull/5072
- [ ] docs
- [ ] tests
Close https://github.com/huggingface/datasets/issues/5083
| true
|
1,399,842,514
|
https://api.github.com/repos/huggingface/datasets/issues/5083
|
https://github.com/huggingface/datasets/issues/5083
| 5,083
|
Support numpy/torch/tf/jax formatting for IterableDataset
|
closed
| 2
| 2022-10-06T15:14:58
| 2023-10-09T12:42:15
| 2023-10-09T12:42:15
|
lhoestq
|
[
"enhancement",
"streaming",
"good second issue"
] |
Right now `IterableDataset` doesn't do any formatting.
In particular this code should return a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
Right now it returns a PIL.Image.
Setting `streaming=False` does return a numpy array after #5072
| false
|
1,399,379,777
|
https://api.github.com/repos/huggingface/datasets/issues/5082
|
https://github.com/huggingface/datasets/pull/5082
| 5,082
|
adding keep in memory
|
closed
| 2
| 2022-10-06T11:10:46
| 2022-10-07T14:35:34
| 2022-10-07T14:32:54
|
Mustapha-AJEGHRIR
|
[] |
Fixing #514 .
Hello @mariosasko 👋, I have implemented what you recommended in issue #514 to fix the keep-in-memory problem for shuffle.
| true
|
1,399,340,050
|
https://api.github.com/repos/huggingface/datasets/issues/5081
|
https://github.com/huggingface/datasets/issues/5081
| 5,081
|
Bug loading `sentence-transformers/parallel-sentences`
|
open
| 8
| 2022-10-06T10:47:51
| 2022-10-11T10:00:48
| null |
PhilipMay
|
[
"bug"
] |
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("sentence-transformers/parallel-sentences")
```
raises this:
```
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [4], line 1
----> 1 dataset = load_dataset("sentence-transformers/parallel-sentences", split="train")
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/load.py:1693, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1690 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1692 # Download and prepare data
-> 1693 builder_instance.download_and_prepare(
1694 download_config=download_config,
1695 download_mode=download_mode,
1696 ignore_verifications=ignore_verifications,
1697 try_from_hf_gcs=try_from_hf_gcs,
1698 use_auth_token=use_auth_token,
1699 )
1701 # Build dataset for splits
1702 keep_in_memory = (
1703 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1704 )
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:807, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
801 if not downloaded_from_gcs:
802 prepare_split_kwargs = {
803 "file_format": file_format,
804 "max_shard_size": max_shard_size,
805 **download_and_prepare_kwargs,
806 }
--> 807 self._download_and_prepare(
808 dl_manager=dl_manager,
809 verify_infos=verify_infos,
810 **prepare_split_kwargs,
811 **download_and_prepare_kwargs,
812 )
813 # Sync info
814 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:898, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
894 split_dict.add(split_generator.split_info)
896 try:
897 # Prepare split will record examples associated to the split
--> 898 self._prepare_split(split_generator, **prepare_split_kwargs)
899 except OSError as e:
900 raise OSError(
901 "Cannot find data file. "
902 + (self.manual_download_instructions or "")
903 + "\nOriginal error:\n"
904 + str(e)
905 ) from None
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:1513, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size)
1506 shard_id += 1
1507 writer = writer_class(
1508 features=writer._features,
1509 path=fpath.replace("SSSSS", f"{shard_id:05d}"),
1510 storage_options=self._fs.storage_options,
1511 embed_local_files=embed_local_files,
1512 )
-> 1513 writer.write_table(table)
1514 finally:
1515 num_shards = shard_id + 1
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/arrow_writer.py:540, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
538 if self.pa_writer is None:
539 self._build_writer(inferred_schema=pa_table.schema)
--> 540 pa_table = table_cast(pa_table, self._schema)
541 if self.embed_local_files:
542 pa_table = embed_table_storage(pa_table)
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2044, in table_cast(table, schema)
2032 """Improved version of pa.Table.cast.
2033
2034 It supports casting to feature types stored in the schema metadata.
(...)
2041 table (:obj:`pyarrow.Table`): the casted table
2042 """
2043 if table.schema != schema:
-> 2044 return cast_table_to_schema(table, schema)
2045 elif table.schema.metadata != schema.metadata:
2046 return table.replace_schema_metadata(schema.metadata)
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2005, in cast_table_to_schema(table, schema)
2003 features = Features.from_arrow_schema(schema)
2004 if sorted(table.column_names) != sorted(features):
-> 2005 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2006 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
2007 return pa.Table.from_arrays(arrays, schema=schema)
ValueError: Couldn't cast
Action taken on Parliament's resolutions: see Minutes: string
Následný postup na základě usnesení Parlamentu: viz zápis: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 742
to
{'Membership of Parliament: see Minutes': Value(dtype='string', id=None), 'Състав на Парламента: вж. протоколи': Value(dtype='string', id=None)}
because column names don't match
```
## Expected results
no error
## Actual results
error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.13
- PyArrow version: pyarrow 9.0.0
- transformers 4.22.2
- datasets 2.5.2
| false
|
1,398,849,565
|
https://api.github.com/repos/huggingface/datasets/issues/5080
|
https://github.com/huggingface/datasets/issues/5080
| 5,080
|
Use hfh for caching
|
open
| 1
| 2022-10-06T05:51:58
| 2022-10-06T14:26:05
| null |
albertvillanova
|
[
"enhancement"
] |
## Is your feature request related to a problem?
As previously discussed in our meeting with @Wauplin and agreed at our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.
## Describe the solution you'd like
Due to the peculiarities of the `datasets` cache, I would propose adopting the `hfh` caching system in stages.
First, we could easily start using `hfh` caching for:
- dataset Python scripts
- dataset READMEs
- dataset infos JSON files (now deprecated)
Second, we could also use `hfh` caching for data files downloaded from the Hub.
Further investigation is needed for:
- files downloaded from non-Hub hosts
- extracted files from downloaded archive/compressed files
- generated Arrow files
## Additional context
Docs about the `hfh` caching system:
- [Manage huggingface_hub cache-system](https://huggingface.co/docs/huggingface_hub/main/en/how-to-cache)
- [Cache-system reference](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/cache)
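For context, `hfh` stores each downloaded file content-addressed under `blobs/` and exposes it per-revision under `snapshots/` through symlinks, so identical files are deduplicated across revisions. A toy sketch of that layout (the function and hashing scheme are illustrative only; the real `hfh` keys blobs by ETag):

```python
import hashlib
import os
import tempfile

def cache_file(cache_dir, repo_id, revision, filename, content: bytes):
    """Toy sketch of the hfh blob/snapshot cache layout (illustrative only)."""
    repo_dir = os.path.join(cache_dir, "datasets--" + repo_id.replace("/", "--"))
    blob_dir = os.path.join(repo_dir, "blobs")
    snap_dir = os.path.join(repo_dir, "snapshots", revision)
    os.makedirs(blob_dir, exist_ok=True)
    os.makedirs(snap_dir, exist_ok=True)
    # Real hfh uses the file's ETag; sha256 stands in for it here.
    blob_path = os.path.join(blob_dir, hashlib.sha256(content).hexdigest())
    if not os.path.exists(blob_path):  # deduplicated across revisions
        with open(blob_path, "wb") as f:
            f.write(content)
    link_path = os.path.join(snap_dir, filename)
    if not os.path.lexists(link_path):
        os.symlink(blob_path, link_path)
    return link_path

cache = tempfile.mkdtemp()
path = cache_file(cache, "glue", "abc123", "README.md", b"# GLUE")
print(open(path).read())  # → # GLUE
```

Reusing this layout for dataset scripts/READMEs first would let `datasets` share dedup and revision handling with the rest of the ecosystem.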
The `transformers` library has already adopted `hfh` for caching. See:
- huggingface/transformers#18438
- huggingface/transformers#18857
- huggingface/transformers#18966
| false
|
1,398,609,305
|
https://api.github.com/repos/huggingface/datasets/issues/5079
|
https://github.com/huggingface/datasets/pull/5079
| 5,079
|
refactor: replace AssertionError with more meaningful exceptions (#5074)
|
closed
| 1
| 2022-10-06T01:39:35
| 2022-10-07T14:35:43
| 2022-10-07T14:33:10
|
galbwe
|
[] |
Closes #5074
Replaces `AssertionError` in the following files with more descriptive exceptions:
- `src/datasets/arrow_reader.py`
- `src/datasets/builder.py`
- `src/datasets/utils/version.py`
The issue listed more files that needed to be fixed, but the rest of them were contained in the top-level `datasets` directory, which was removed when #4974 was merged
| true
|
1,398,335,148
|
https://api.github.com/repos/huggingface/datasets/issues/5078
|
https://github.com/huggingface/datasets/pull/5078
| 5,078
|
Fix header level in Audio docs
|
closed
| 1
| 2022-10-05T20:22:44
| 2022-10-06T08:12:23
| 2022-10-06T08:09:41
|
stevhliu
|
[] |
Fixes header level so `Dataset features` is the doc title instead of `The Audio type`:

| true
|
1,398,080,859
|
https://api.github.com/repos/huggingface/datasets/issues/5077
|
https://github.com/huggingface/datasets/pull/5077
| 5,077
|
Fix passed download_config in HubDatasetModuleFactoryWithoutScript
|
closed
| 1
| 2022-10-05T16:42:36
| 2022-10-06T05:31:22
| 2022-10-06T05:29:06
|
albertvillanova
|
[] |
Fix passed `download_config` in `HubDatasetModuleFactoryWithoutScript`.
| true
|
1,397,918,092
|
https://api.github.com/repos/huggingface/datasets/issues/5076
|
https://github.com/huggingface/datasets/pull/5076
| 5,076
|
fix: update exception throw from OSError to EnvironmentError in `push…
|
closed
| 1
| 2022-10-05T14:46:29
| 2022-10-07T14:35:57
| 2022-10-07T14:33:27
|
rahulXs
|
[] |
Status:
Ready for review
Description of Changes:
Fixes #5075
Changes proposed in this pull request:
- Throw EnvironmentError instead of OSError in `push_to_hub` when the Hub token is not present.
| true
|
1,397,865,501
|
https://api.github.com/repos/huggingface/datasets/issues/5075
|
https://github.com/huggingface/datasets/issues/5075
| 5,075
|
Throw EnvironmentError when token is not present
|
closed
| 1
| 2022-10-05T14:14:18
| 2022-10-07T14:33:28
| 2022-10-07T14:33:28
|
mariosasko
|
[
"good first issue",
"hacktoberfest"
] |
Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present.
| false
|
1,397,850,352
|
https://api.github.com/repos/huggingface/datasets/issues/5074
|
https://github.com/huggingface/datasets/issues/5074
| 5,074
|
Replace AssertionErrors with more meaningful errors
|
closed
| 3
| 2022-10-05T14:03:55
| 2022-10-07T14:33:11
| 2022-10-07T14:33:11
|
mariosasko
|
[
"good first issue",
"hacktoberfest"
] |
Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.
The files with AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
```
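A minimal before/after sketch of the kind of change being requested (illustrative names, not the actual diff):

```python
# Before: an assertion whose failure gives users an unhelpful AssertionError
# assert split_name in splits, "unknown split"

# After: a descriptive exception of an appropriate type
def get_split(splits: dict, split_name: str):
    if split_name not in splits:
        raise ValueError(
            f"Unknown split {split_name!r}. Expected one of {sorted(splits)}."
        )
    return splits[split_name]

print(get_split({"train": [1, 2]}, "train"))  # → [1, 2]
```

This also matters because assertions are stripped when Python runs with `-O`, so they should never guard user-facing invariants.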
| false
|
1,397,832,183
|
https://api.github.com/repos/huggingface/datasets/issues/5073
|
https://github.com/huggingface/datasets/pull/5073
| 5,073
|
Restore saved format state in `load_from_disk`
|
closed
| 1
| 2022-10-05T13:51:47
| 2022-10-11T16:55:07
| 2022-10-11T16:49:23
|
asofiaoliveira
|
[] |
Hello! @mariosasko
This pull request relates to issue #5050 and intends to add the format to datasets loaded from disk.
All I did was add a `set_format` call in `Dataset.load_from_disk`, since `DatasetDict.load_from_disk` relies on it.
I don't know if I should add a test and where, so let me know if I should and I can work on that as well!
| true
|
1,397,765,531
|
https://api.github.com/repos/huggingface/datasets/issues/5072
|
https://github.com/huggingface/datasets/pull/5072
| 5,072
|
Image & Audio formatting for numpy/torch/tf/jax
|
closed
| 3
| 2022-10-05T13:07:03
| 2022-10-10T13:24:10
| 2022-10-10T13:21:32
|
lhoestq
|
[] |
Added support for image and audio formatting for numpy, torch, tf and jax.
For images, the dtype used is the image's own dtype (the one returned by PIL.Image), e.g. uint8
I also added support for string, binary and None types. In particular for torch and jax, strings are kept unchanged (previously it was returning an error because you can't create a tensor of strings)
| true
|
1,397,301,270
|
https://api.github.com/repos/huggingface/datasets/issues/5071
|
https://github.com/huggingface/datasets/pull/5071
| 5,071
|
Support DEFAULT_CONFIG_NAME when no BUILDER_CONFIGS
|
closed
| 2
| 2022-10-05T06:28:39
| 2022-10-06T14:43:12
| 2022-10-06T14:40:26
|
albertvillanova
|
[] |
This PR supports defining a default config name, even if no predefined allowed config names are set.
Fix #5070.
CC: @stas00
| true
|
1,396,765,647
|
https://api.github.com/repos/huggingface/datasets/issues/5070
|
https://github.com/huggingface/datasets/issues/5070
| 5,070
|
Support default config name when no builder configs
|
closed
| 1
| 2022-10-04T19:49:35
| 2022-10-06T14:40:26
| 2022-10-06T14:40:26
|
albertvillanova
|
[
"enhancement"
] |
**Is your feature request related to a problem? Please describe.**
As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined.
**Additional context**
In order to support creating configs on the fly **by name** (not using kwargs), the list of allowed builder configs `BUILDER_CONFIGS` must not be set.
However, if so, then `DEFAULT_CONFIG_NAME` is not supported.
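The requested resolution order can be sketched as follows (a toy model of the logic, not the actual `datasets` code):

```python
def resolve_config_name(requested, builder_configs, default_config_name):
    """Pick a config name; DEFAULT_CONFIG_NAME should apply even when
    BUILDER_CONFIGS is empty (configs created on the fly by name)."""
    if requested is not None:
        return requested
    if default_config_name is not None:
        return default_config_name
    if len(builder_configs) == 1:
        return builder_configs[0]
    raise ValueError("Please pick one config name explicitly.")

print(resolve_config_name(None, [], "en"))  # → en
print(resolve_config_name("fr", [], "en"))  # → fr
```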
| false
|
1,396,361,768
|
https://api.github.com/repos/huggingface/datasets/issues/5067
|
https://github.com/huggingface/datasets/pull/5067
| 5,067
|
Fix CONTRIBUTING once dataset scripts transferred to Hub
|
closed
| 1
| 2022-10-04T14:16:05
| 2022-10-06T06:14:43
| 2022-10-06T06:12:12
|
albertvillanova
|
[] |
This PR updates the `CONTRIBUTING.md` guide, now that all the dataset scripts have been removed from the GitHub repo and transferred to the HF Hub:
- #4974
See diff here: https://github.com/huggingface/datasets/commit/e3291ecff9e54f09fcee3f313f051a03fdc3d94b
Additionally, this PR fixes the line separator that by some previous mistake was CRLF instead of LF.
| true
|
1,396,086,745
|
https://api.github.com/repos/huggingface/datasets/issues/5066
|
https://github.com/huggingface/datasets/pull/5066
| 5,066
|
Support streaming gzip.open
|
closed
| 1
| 2022-10-04T11:20:05
| 2022-10-06T15:13:51
| 2022-10-06T15:11:29
|
albertvillanova
|
[] |
This PR implements out-of-the-box streaming support for dataset scripts containing `gzip.open`.
This has been a recurring issue. See, e.g.:
- #5060
- #3191
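One way such support can work is to dispatch on the path: remote URLs go through an fsspec-backed stream while local paths use plain `gzip.open`. A hedged sketch (the actual change lives inside `datasets`' streaming machinery; `xgzip_open` is an illustrative name):

```python
import gzip
import os
import tempfile

def xgzip_open(filepath, mode="rt", **kwargs):
    """Sketch: route remote URLs through fsspec, local paths through gzip."""
    if str(filepath).startswith(("http://", "https://")):
        import fsspec  # only needed for remote paths
        return fsspec.open(filepath, mode=mode, compression="gzip", **kwargs).open()
    return gzip.open(filepath, mode=mode, **kwargs)

# Local usage behaves exactly like gzip.open:
path = os.path.join(tempfile.mkdtemp(), "data.jsonl.gz")
with gzip.open(path, "wt") as f:
    f.write('{"text": "hello"}\n')
with xgzip_open(path) as f:
    print(f.readline())  # → {"text": "hello"}
```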
| true
|
1,396,003,362
|
https://api.github.com/repos/huggingface/datasets/issues/5065
|
https://github.com/huggingface/datasets/pull/5065
| 5,065
|
Ci py3.10
|
closed
| 2
| 2022-10-04T10:13:51
| 2022-11-29T15:28:05
| 2022-11-29T15:25:26
|
lhoestq
|
[] |
Added a CI job for python 3.10
Some dependencies, like Apache Beam, don't work on 3.10, so I removed them from the extras in this case.
I also removed some s3 fixtures that we don't use anymore (and that don't work on 3.10 anyway)
| true
|
1,395,978,143
|
https://api.github.com/repos/huggingface/datasets/issues/5064
|
https://github.com/huggingface/datasets/pull/5064
| 5,064
|
Align signature of create/delete_repo with latest hfh
|
closed
| 1
| 2022-10-04T09:54:53
| 2022-10-07T17:02:11
| 2022-10-07T16:59:30
|
albertvillanova
|
[] |
This PR aligns the signature of `create_repo`/`delete_repo` with the current one in hfh, by removing deprecated `name` and `organization`, and using `repo_id` instead.
Related to:
- #5063
CC: @lhoestq
| true
|
1,395,895,463
|
https://api.github.com/repos/huggingface/datasets/issues/5063
|
https://github.com/huggingface/datasets/pull/5063
| 5,063
|
Align signature of list_repo_files with latest hfh
|
closed
| 1
| 2022-10-04T08:51:46
| 2022-10-07T16:42:57
| 2022-10-07T16:40:16
|
albertvillanova
|
[] |
This PR aligns the signature of `list_repo_files` with the current one in `hfh`, by renaming deprecated `token` to `use_auth_token`.
This is already the case for `dataset_info`.
CC: @lhoestq
| true
|
1,395,739,417
|
https://api.github.com/repos/huggingface/datasets/issues/5062
|
https://github.com/huggingface/datasets/pull/5062
| 5,062
|
Fix CI hfh token warning
|
closed
| 2
| 2022-10-04T06:36:54
| 2022-10-04T08:58:15
| 2022-10-04T08:42:31
|
albertvillanova
|
[] |
In our CI, we get warnings from `hfh` about using deprecated `token`: https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_overwrite_files
C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\huggingface_hub\utils\_deprecation.py:97: FutureWarning: Deprecated argument(s) used in 'dataset_info': token. Will not be supported from version '0.12'.
warnings.warn(message, FutureWarning)
```
This PR fixes the tests in `TestPushToHub` so that these warnings no longer appear.
Continuation of:
- #5031
CC: @lhoestq
| true
|
1,395,476,770
|
https://api.github.com/repos/huggingface/datasets/issues/5061
|
https://github.com/huggingface/datasets/issues/5061
| 5,061
|
`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map`
|
closed
| 6
| 2022-10-03T23:51:38
| 2023-07-21T14:43:35
| 2023-07-21T14:43:34
|
ZhaofengWu
|
[
"bug"
] |
## Describe the bug
When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.
```
File "~/project/dataset.py", line 204, in <dictcomp>
split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 2489, in map
transformed_shards[index] = async_result.get()
File ".../site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File ".../site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File ".../site-packages/multiprocess/connection.py", line 214, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File ".../site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File ".../site-packages/dill/_dill.py", line 620, in dump
StockPickler.dump(self, obj)
File ".../pickle.py", line 487, in dump
self.save(obj)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 902, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1154, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 578, in save
rv = reduce(self.proto)
File ".../logging/__init__.py", line 1774, in __reduce__
raise pickle.PicklingError('logger cannot be pickled')
_pickle.PicklingError: logger cannot be pickled
```
## Steps to reproduce the bug
Sorry, I wasn't able to produce a minimal reproducible example, but the offending line on my end is
```python
dataset.map(
lambda examples: self.tokenize(examples), # this doesn't matter, lambda e: [1] * len(...) also breaks. In fact I'm pretty sure it breaks before executing this lambda
batched=True,
num_proc=4,
)
```
This does work when `num_proc=1`, so it's likely a multiprocessing thing.
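The root cause can be reproduced in isolation: CPython's `logging.Logger.__reduce__` refuses to pickle any logger that is not registered under its name via `logging.getLogger` (a minimal stdlib sketch, independent of `datasets`):

```python
import logging
import pickle

# A Logger constructed directly is not the instance getLogger("demo") returns,
# so Logger.__reduce__ raises exactly the error seen in the traceback above.
unregistered = logging.Logger("demo")
try:
    pickle.dumps(unregistered)
except pickle.PicklingError as e:
    print(e)  # → logger cannot be pickled

# A registered logger pickles fine: it is serialized by name.
registered = logging.getLogger("demo")
assert pickle.loads(pickle.dumps(registered)) is registered
```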
## Expected results
`map` succeeds
## Actual results
The error trace above.
## Environment info
- `datasets` version: 1.16.1 and 2.5.1 both failed
- Platform: Ubuntu 20.04.4 LTS
- Python version: 3.10.4
- PyArrow version: 9.0.0
| false
|
1,395,382,940
|
https://api.github.com/repos/huggingface/datasets/issues/5060
|
https://github.com/huggingface/datasets/issues/5060
| 5,060
|
Unable to Use Custom Dataset Locally
|
closed
| 4
| 2022-10-03T21:55:16
| 2022-10-06T14:29:18
| 2022-10-06T14:29:17
|
zanussbaum
|
[
"bug"
] |
## Describe the bug
I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in the same folder or repository of the dataset script,
you can just pass the relative paths to the files instead of URLs.
```
Accordingly, I put the [relative path](https://huggingface.co/datasets/zpn/pubchem_selfies/blob/main/pubchem_selfies.py#L76) to the data to be used. I was able to test the dataset and generate the metadata locally with `datasets-cli test path/to/<your-dataset-loading-script> --save_infos --all_configs`
However, if I try to load the data using `load_dataset`, I get the following error
```
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("zpn/pubchem_selfies", streaming=True)
>>> t = dataset["train"]
>>> for item in t:
...... print(item)
...... break
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 723, in __iter__
for key, example in self._iter():
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 713, in _iter
yield from ex_iterable
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/zachnussbaum/.cache/huggingface/modules/datasets_modules/datasets/zpn--pubchem_selfies/d2571f35996765aea70fd3f3f8e3882d59c401fb738615c79282e2eb1d9f7a25/pubchem_selfies.py", line 475, in _generate_examples
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| false
|
1,395,050,876
|
https://api.github.com/repos/huggingface/datasets/issues/5059
|
https://github.com/huggingface/datasets/pull/5059
| 5,059
|
Fix typo
|
closed
| 1
| 2022-10-03T17:05:25
| 2022-10-03T17:34:40
| 2022-10-03T17:32:27
|
stevhliu
|
[] |
Fixes a small typo :)
| true
|
1,394,962,424
|
https://api.github.com/repos/huggingface/datasets/issues/5058
|
https://github.com/huggingface/datasets/pull/5058
| 5,058
|
Mark CI tests as xfail when 502 error
|
closed
| 1
| 2022-10-03T15:53:55
| 2022-10-04T10:03:23
| 2022-10-04T10:01:23
|
albertvillanova
|
[] |
To make CI more robust, we could mark tests as xfail when the Hub raises a 502 error (besides 500 errors):
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_skip_identical_files
- https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16648055339047.git/info/lfs/objects/batch
```
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_overwrite_files
- https://github.com/huggingface/datasets/actions/runs/3145587033/jobs/5113074889
```
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16643866807322.git/info/lfs/objects/verify
```
Currently, we mark as xfail when 500 error:
- #4845
| true
|
1,394,827,216
|
https://api.github.com/repos/huggingface/datasets/issues/5057
|
https://github.com/huggingface/datasets/pull/5057
| 5,057
|
Support `converters` in `CsvBuilder`
|
closed
| 1
| 2022-10-03T14:23:21
| 2022-10-04T11:19:28
| 2022-10-04T11:17:32
|
mariosasko
|
[] |
Add the `converters` param to `CsvBuilder`, to help in situations like [this one](https://discuss.huggingface.co/t/typeerror-in-load-dataset-related-to-a-sequence-of-strings/23545).
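For context, the CSV builder reads files via `pandas.read_csv`, whose `converters` argument maps a column name to a callable applied to each raw cell. A minimal sketch of what the new param enables, assuming pandas is available:

```python
import io

import pandas as pd

# A cell like "a|b" becomes a list of strings instead of one raw string.
csv_data = "id,tags\n1,a|b\n2,c\n"
df = pd.read_csv(io.StringIO(csv_data), converters={"tags": lambda s: s.split("|")})
print(df["tags"].tolist())  # → [['a', 'b'], ['c']]
```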
| true
|
1,394,713,173
|
https://api.github.com/repos/huggingface/datasets/issues/5056
|
https://github.com/huggingface/datasets/pull/5056
| 5,056
|
Fix broken URL's (GEM)
|
closed
| 2
| 2022-10-03T13:13:22
| 2022-10-04T13:49:00
| 2022-10-04T13:48:59
|
manandey
|
[] |
This PR fixes the broken URL's in GEM. cc. @lhoestq, @albertvillanova
| true
|
1,394,503,844
|
https://api.github.com/repos/huggingface/datasets/issues/5055
|
https://github.com/huggingface/datasets/pull/5055
| 5,055
|
Fix backward compatibility for dataset_infos.json
|
closed
| 1
| 2022-10-03T10:30:14
| 2022-10-03T13:43:55
| 2022-10-03T13:41:32
|
lhoestq
|
[] |
While working on https://github.com/huggingface/datasets/pull/5018 I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json
Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored only if the README.md has a dataset_info field, which has precedence over the data in the JSON file.
| true
|
1,394,152,728
|
https://api.github.com/repos/huggingface/datasets/issues/5054
|
https://github.com/huggingface/datasets/pull/5054
| 5,054
|
Fix license/citation information of squadshifts dataset card
|
closed
| 1
| 2022-10-03T05:19:13
| 2022-10-03T09:26:49
| 2022-10-03T09:24:30
|
albertvillanova
|
[
"dataset contribution"
] |
This PR fixes the license/citation information of squadshifts dataset card, once the dataset owners have responded to our request for information:
- https://github.com/modestyachts/squadshifts-website/issues/1
Additionally, we have updated the mention of our `datasets` library on their website (it was still referring to the old name `nlp`):
- https://github.com/modestyachts/squadshifts-website/pull/2#event-7500953009
| true
|
1,393,739,882
|
https://api.github.com/repos/huggingface/datasets/issues/5053
|
https://github.com/huggingface/datasets/issues/5053
| 5,053
|
Intermittent JSON parse error when streaming the Pile
|
open
| 3
| 2022-10-02T11:56:46
| 2022-10-04T17:59:03
| null |
neelnanda-io
|
[
"bug"
] |
## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point where it happens also varies - once it happened 11B tokens and 4 days into a training run, and now it happened 2 minutes into one - so I can't reliably reproduce it.
I'm using a remote machine with 8 A6000 GPUs via runpod.io
## Expected results
I have a DataLoader which can iterate through the whole Pile
## Actual results
Stack trace:
```
Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
```
I'm currently using HuggingFace accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's to do with parallelisation
```
Traceback (most recent call last):
File "ddp_script.py", line 1258, in <module>
main()
File "ddp_script.py", line 1143, in main
for c, batch in tqdm.tqdm(enumerate(data_iter)):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__
next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator)
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches
broadcast_object_list(batch_info)
File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list
torch.distributed.broadcast_object_list(object_list, src=from_process)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object
return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '@'.
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(
cfg["dataset_name"], streaming=True, split="train")
dataset = dataset.remove_columns("meta")
dataset = dataset.map(tokenize_and_concatenate, batched=True)
dataset = dataset.with_format(type="torch")
train_data_loader = DataLoader(
dataset, batch_size=cfg["batch_size"], num_workers=3)
for batch in train_data_loader:
continue
```
`tokenize_and_concatenate` is a custom tokenization function I defined on the GPT-NeoX tokenizer to tokenize the text, join it with end-of-text tokens, and reshape it into batches of shape (batch_size, seq_len). I don't think this is related to tokenization:
```python
import numpy as np
import einops
import torch
def tokenize_and_concatenate(examples):  # `tokenizer` and `seq_len` come from the enclosing scope
texts = examples["text"]
full_text = tokenizer.eos_token.join(texts)
div = 20
length = len(full_text) // div
text_list = [full_text[i * length: (i + 1) * length]
for i in range(div)]
tokens = tokenizer(text_list, return_tensors="np", padding=True)[
"input_ids"
].flatten()
tokens = tokens[tokens != tokenizer.pad_token_id]
n = len(tokens)
curr_batch_size = n // (seq_len - 1)
tokens = tokens[: (seq_len - 1) * curr_batch_size]
tokens = einops.rearrange(
tokens,
"(batch_size seq) -> batch_size seq",
batch_size=curr_batch_size,
seq=seq_len - 1,
)
prefix = np.ones((curr_batch_size, 1), dtype=np.int64) * \
tokenizer.bos_token_id
return {
"text": np.concatenate([prefix, tokens], axis=1)
}
```
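The reshaping at the end of that function is just a truncate-and-reshape plus a prepended BOS column; a minimal numpy-only sketch of the same step (all values are stand-ins):

```python
import numpy as np

seq_len = 5                                     # stand-in for the real config value
tokens = np.arange(12)                          # pretend token ids
curr_batch_size = len(tokens) // (seq_len - 1)  # 3 rows of 4 tokens each
tokens = tokens[: (seq_len - 1) * curr_batch_size].reshape(curr_batch_size, seq_len - 1)
prefix = np.full((curr_batch_size, 1), 0)       # stand-in for bos_token_id
batch = np.concatenate([prefix, tokens], axis=1)
print(batch.shape)  # → (3, 5)
```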
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-105-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
ZStandard data:
Version: 0.18.0
Summary: Zstandard bindings for Python
Home-page: https://github.com/indygreg/python-zstandard
Author: Gregory Szorc
Author-email: gregory.szorc@gmail.com
License: BSD
Location: /opt/conda/lib/python3.7/site-packages
Requires:
Required-by:
| false
|
1,393,076,765
|
https://api.github.com/repos/huggingface/datasets/issues/5052
|
https://github.com/huggingface/datasets/pull/5052
| 5,052
|
added from_generator method to IterableDataset class.
|
closed
| 3
| 2022-09-30T22:14:05
| 2022-10-05T12:51:48
| 2022-10-05T12:10:48
|
hamid-vakilzadeh
|
[] |
Hello,
This resolves issue #4988.
I added a `from_generator` method to the `IterableDataset` class.
I modified the `read` method of the input stream generator to also return an `IterableDataset`.
| true
|
1,392,559,503
|
https://api.github.com/repos/huggingface/datasets/issues/5051
|
https://github.com/huggingface/datasets/pull/5051
| 5,051
|
Revert task removal in folder-based builders
|
closed
| 1
| 2022-09-30T14:50:03
| 2022-10-03T12:23:35
| 2022-10-03T12:21:31
|
mariosasko
|
[] |
Reverts the removal of `task_templates` in the folder-based builders. I also added the `AudioClassification` task for consistency.
This is needed to fix https://github.com/huggingface/transformers/issues/19177.
I think we should soon deprecate and remove the current task API (and investigate if it's possible to integrate the `train eval index` API), but we need to update the Transformers examples before that so we don't break them.
cc @NielsRogge
| true
|
1,392,381,882
|
https://api.github.com/repos/huggingface/datasets/issues/5050
|
https://github.com/huggingface/datasets/issues/5050
| 5,050
|
Restore saved format state in `load_from_disk`
|
closed
| 2
| 2022-09-30T12:40:07
| 2022-10-11T16:49:24
| 2022-10-11T16:49:24
|
mariosasko
|
[
"bug",
"good first issue"
] |
Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815
| false
|
1,392,361,381
|
https://api.github.com/repos/huggingface/datasets/issues/5049
|
https://github.com/huggingface/datasets/pull/5049
| 5,049
|
Add `kwargs` to `Dataset.from_generator`
|
closed
| 1
| 2022-09-30T12:24:27
| 2022-10-03T11:00:11
| 2022-10-03T10:58:15
|
mariosasko
|
[] |
Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance).
| true
|
1,392,170,680
|
https://api.github.com/repos/huggingface/datasets/issues/5048
|
https://github.com/huggingface/datasets/pull/5048
| 5,048
|
Fix bug with labels of eurlex config of lex_glue dataset
|
closed
| 4
| 2022-09-30T09:47:12
| 2022-09-30T16:30:25
| 2022-09-30T16:21:41
|
iliaschalkidis
|
[
"dataset contribution"
] |
Fix for a critical bug in the EURLEX dataset label list to make LexGLUE EURLEX results replicable.
In LexGLUE (Chalkidis et al., 2022), the following is mentioned w.r.t. EUR-LEX: _"It supports four different label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively. We use the 100 most frequent concepts from level 2 [...]”._ The current label list has all 127 labels, which leads to different (lower) results, as communicated by users.
Thanks!
| true
|
1,392,088,398
|
https://api.github.com/repos/huggingface/datasets/issues/5047
|
https://github.com/huggingface/datasets/pull/5047
| 5,047
|
Fix cats_vs_dogs
|
closed
| 1
| 2022-09-30T08:47:29
| 2022-09-30T10:23:22
| 2022-09-30T09:34:28
|
lhoestq
|
[
"dataset contribution"
] |
Reported in https://github.com/huggingface/datasets/pull/3878
I updated the number of examples
| true
|
1,391,372,519
|
https://api.github.com/repos/huggingface/datasets/issues/5046
|
https://github.com/huggingface/datasets/issues/5046
| 5,046
|
Audiofolder creates empty Dataset if files same level as metadata
|
closed
| 5
| 2022-09-29T19:17:23
| 2022-10-28T13:05:07
| 2022-10-28T13:05:07
|
msis
|
[
"bug",
"good first issue",
"hacktoberfest"
] |
## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88
## Steps to reproduce the bug
`metadata.csv`:
```csv
file_name,duration,transcription
./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello
```
```python
>>> audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/")
>>> audio_dataset
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
I've tried, with no success:
- setting `split` to something else so I don't get a `DatasetDict`,
- removing the `./`,
- using `.jsonl`.
## Expected results
```
Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 1
})
```
## Actual results
```
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
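For what it's worth, one plausible source of such a silent mismatch — this is a guess on my side, not a confirmed root cause — is path normalization between the `file_name` column and the scanned files:

```python
import os

# Paths as they might appear: the metadata's file_name column uses a "./"
# prefix, while a directory scan typically yields the bare file name.
meta_name = "./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav"
scanned_name = "2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav"

# A naive string comparison silently fails to pair the audio file with its row...
assert meta_name != scanned_name

# ...while normalizing both sides makes them match.
assert os.path.normpath(meta_name) == os.path.normpath(scanned_name)
```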
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| false
|
1,391,287,609
|
https://api.github.com/repos/huggingface/datasets/issues/5045
|
https://github.com/huggingface/datasets/issues/5045
| 5,045
|
Automatically revert to last successful commit to hub when a push_to_hub is interrupted
|
closed
| 5
| 2022-09-29T18:08:12
| 2023-10-16T13:30:49
| 2023-10-16T13:30:49
|
jorahn
|
[
"enhancement"
] |
**Is your feature request related to a problem? Please describe.**
I pushed a modification of a large dataset (removing a column) to the hub. The push was interrupted after some files had been committed to the repo. This left the dataset in a state where `load_dataset()` raises an error (ValueError: couldn't cast … because column names don't match). Only by specifying the previous (complete) commit as `revision=commit_hash` in `load_dataset()` was I able to repair this, and after a successful, complete push, the dataset loads without error again.
**Describe the solution you'd like**
Would it make sense to detect an incomplete push_to_hub() and automatically revert to the previous commit/revision?
**Describe alternatives you've considered**
Leave everything as is; the `revision` parameter in `load_dataset()` allows one to fix this problem manually.
**Additional context**
Provide useful defaults
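As a sketch of what a graceful fallback could look like — the loader is injected here to keep the sketch self-contained and runnable without network access, and all names are made up:

```python
def load_with_revision_fallback(load_fn, path, last_good_revision):
    """Try loading at HEAD first; on the schema-mismatch ValueError described
    above, fall back to the last commit known to be complete."""
    try:
        return load_fn(path)
    except ValueError:
        return load_fn(path, revision=last_good_revision)

# Stub loader standing in for datasets.load_dataset, to exercise the fallback:
def fake_load(path, revision=None):
    if revision is None:
        raise ValueError("Couldn't cast ... because column names don't match")
    return f"{path}@{revision}"

print(load_with_revision_fallback(fake_load, "my/dataset", "abc123"))  # my/dataset@abc123
```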
| false
|
1,391,242,908
|
https://api.github.com/repos/huggingface/datasets/issues/5044
|
https://github.com/huggingface/datasets/issues/5044
| 5,044
|
integrate `load_from_disk` into `load_dataset`
|
open
| 15
| 2022-09-29T17:37:12
| 2025-06-28T09:00:44
| null |
stas00
|
[
"enhancement"
] |
**Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal similar to `from_pretrained` in `transformers` so that it can handle the hub, and the local path datasets of all supported types?
Currently one has to choose a different loader depending on how the dataset has been created.
e.g. this won't work:
```
$ git clone https://huggingface.co/datasets/severo/test-parquet
$ python -c 'from datasets import load_dataset; ds=load_dataset("test-parquet"); \
ds.save_to_disk("my_dataset"); load_dataset("my_dataset")'
[...]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 1968, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
```
both times the dataset is being loaded from disk. Why does it fail the second time?
Why can't `save_to_disk` generate a dataset that can be immediately loaded by `load_dataset`?
e.g. the simplest hack would be to have `save_to_disk` add some flag to the saved dataset that tells `load_dataset` to internally call `load_from_disk`, like having `save_to_disk` create a `load_me_with_load_from_disk.txt` file ;) `load_dataset` would then support that feature for datasets saved with new `datasets` versions; the old ones would still need to use `load_from_disk` explicitly. Unless the flag is not needed and one can immediately tell by looking at the saved dataset that it was saved via `save_to_disk`, and thus use `load_from_disk` internally.
The use-case is defining a simple API where the user only ever needs to pass a `dataset_name_or_path` and it will always just work. Currently one needs to manually add additional switches telling the system whether to use one loading method or the other, which works but is not smooth.
Thank you!
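A minimal sketch of such a dispatcher — assuming, as I believe is the case, that `save_to_disk` already writes marker files (`state.json` for a `Dataset`, `dataset_dict.json` for a `DatasetDict`); the loader callables are injected so the sketch stays self-contained:

```python
import json
import os
import tempfile

def smart_load(path, load_dataset_fn, load_from_disk_fn):
    """Dispatch on the marker files that save_to_disk already writes,
    instead of requiring the caller to pick the right loader."""
    if os.path.isdir(path) and (
        os.path.isfile(os.path.join(path, "state.json"))
        or os.path.isfile(os.path.join(path, "dataset_dict.json"))
    ):
        return load_from_disk_fn(path)
    return load_dataset_fn(path)

# Exercise the dispatch with stub loaders:
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "state.json"), "w") as f:
        json.dump({}, f)
    assert smart_load(d, lambda p: "hub", lambda p: "disk") == "disk"
assert smart_load("user/dataset", lambda p: "hub", lambda p: "disk") == "hub"
```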
| false
|
1,391,141,773
|
https://api.github.com/repos/huggingface/datasets/issues/5043
|
https://github.com/huggingface/datasets/pull/5043
| 5,043
|
Fix `flatten_indices` with empty indices mapping
|
closed
| 1
| 2022-09-29T16:17:28
| 2022-09-30T15:46:39
| 2022-09-30T15:44:25
|
mariosasko
|
[] |
Fix #5038
| true
|
1,390,762,877
|
https://api.github.com/repos/huggingface/datasets/issues/5042
|
https://github.com/huggingface/datasets/pull/5042
| 5,042
|
Update swiss judgment prediction
|
closed
| 1
| 2022-09-29T12:10:02
| 2022-09-30T07:14:00
| 2022-09-29T14:32:02
|
JoelNiklaus
|
[
"dataset contribution"
] |
I forgot to add the new citation.
| true
|
1,390,722,230
|
https://api.github.com/repos/huggingface/datasets/issues/5041
|
https://github.com/huggingface/datasets/pull/5041
| 5,041
|
Support streaming hendrycks_test dataset.
|
closed
| 1
| 2022-09-29T11:37:58
| 2022-09-30T07:13:38
| 2022-09-29T12:07:29
|
albertvillanova
|
[
"dataset contribution"
] |
This PR:
- supports streaming
- fixes the description section of the dataset card
| true
|
1,390,566,428
|
https://api.github.com/repos/huggingface/datasets/issues/5040
|
https://github.com/huggingface/datasets/pull/5040
| 5,040
|
Fix NonMatchingChecksumError in hendrycks_test dataset
|
closed
| 1
| 2022-09-29T09:37:43
| 2022-09-29T10:06:22
| 2022-09-29T10:04:19
|
albertvillanova
|
[
"dataset contribution"
] |
Update metadata JSON.
Fix #5039.
| true
|
1,390,353,315
|
https://api.github.com/repos/huggingface/datasets/issues/5039
|
https://github.com/huggingface/datasets/issues/5039
| 5,039
|
Hendrycks Checksum
|
closed
| 3
| 2022-09-29T06:56:20
| 2022-09-29T10:23:30
| 2022-09-29T10:04:20
|
DanielHesslow
|
[
"dataset bug"
] |
Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not compare correctly; I guess the data has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.tar']
```
| false
|
1,389,631,122
|
https://api.github.com/repos/huggingface/datasets/issues/5038
|
https://github.com/huggingface/datasets/issues/5038
| 5,038
|
`Dataset.unique` showing wrong output after filtering
|
closed
| 2
| 2022-09-28T16:20:35
| 2022-09-30T15:44:25
| 2022-09-30T15:44:25
|
mxschmdt
|
[
"bug"
] |
## Describe the bug
After filtering a dataset, if no samples remain, `Dataset.unique` returns the unique values of the unfiltered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(dataset.unique('id'))
```
## Expected results
The above code should return an empty list since the dataset is empty.
## Actual results
```bash
[0]
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.18.19-100.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.14
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
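A toy sketch of the likely mechanism — an assumption on my part, not the verified internals: a filter that keeps nothing leaves an empty indices mapping over the backing storage, and a `unique()` that reads the storage directly ignores that mapping.

```python
data = [0]      # backing storage, untouched by filter
indices = []    # indices mapping after filter(lambda _: False)

def naive_unique(column, indices):
    return sorted(set(column))                      # buggy: ignores the mapping

def mapped_unique(column, indices):
    return sorted(set(column[i] for i in indices))  # respects the mapping

assert naive_unique(data, indices) == [0]   # the reported wrong output
assert mapped_unique(data, indices) == []   # the expected output
```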
| false
|
1,389,244,722
|
https://api.github.com/repos/huggingface/datasets/issues/5037
|
https://github.com/huggingface/datasets/pull/5037
| 5,037
|
Improve CI performance speed of PackagedDatasetTest
|
closed
| 2
| 2022-09-28T12:08:16
| 2022-09-30T16:05:42
| 2022-09-30T16:03:24
|
albertvillanova
|
[] |
This PR improves PackagedDatasetTest CI performance speed. For Ubuntu (latest):
- Duration (without parallelism) before: 334.78s (5.58m)
- Duration (without parallelism) afterwards: 0.48s
The approach is to pass a dummy `data_files` argument when loading the builder, so that the slow inference of data files over the entire root directory of the repo is avoided.
## Total duration of PackagedDatasetTest
| | Before | Afterwards | Improvement
|---|---:|---:|---:|
| Linux | 334.78s | 0.48s | x700
| Windows | 513.02s | 1.09s | x500
## Durations by each individual sub-test
More accurate durations, running them on GitHub, for Linux (latest).
Before this PR, the total test time (without parallelism) for `tests/test_dataset_common.py::PackagedDatasetTest` is 334.78s (5.58m)
```
39.07s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_imagefolder
38.94s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_audiofolder
34.18s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_parquet
34.12s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_csv
34.00s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_pandas
34.00s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_text
33.86s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_json
10.39s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_audiofolder
6.50s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_audiofolder
6.46s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_imagefolder
6.40s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_imagefolder
5.77s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_csv
5.77s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_text
5.74s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_parquet
5.69s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_json
5.68s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_pandas
5.67s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_parquet
5.67s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_pandas
5.66s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_json
5.66s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_csv
5.55s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_text
(42 durations < 0.005s hidden.)
```
With this PR: 0.48s
```
0.09s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_audiofolder
0.08s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_csv
0.08s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_imagefolder
0.06s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_json
0.05s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_audiofolder
0.05s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_parquet
0.04s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_pandas
0.03s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_text
(55 durations < 0.005s hidden.)
```
| true
|
1,389,094,075
|
https://api.github.com/repos/huggingface/datasets/issues/5036
|
https://github.com/huggingface/datasets/pull/5036
| 5,036
|
Add oversampling strategy iterable datasets interleave
|
closed
| 1
| 2022-09-28T10:10:23
| 2022-09-30T12:30:48
| 2022-09-30T12:28:23
|
ylacombe
|
[] |
Hello everyone,
Following the issue #4893 and the PR #4831, I propose here an oversampling strategy for a `IterableDataset` list.
The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
It follows roughly the same logic as #4831, namely:
- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round-robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.
- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps track of the datasets that have run out of samples but continues to add them to the new dataset, and stops as soon as every dataset has run out of samples at least once.
In order to be consistent and also to align with the `Dataset` behavior, please note that the behavior of the default strategy (`first_exhausted`) has been changed. Namely, it really stops when a dataset is out of samples whereas it used to stop when receiving the `StopIteration` error.
To give an example of the last note, with the following snippet:
```
>>> from tests.test_iterable_dataset import *
>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [0, 1, 2]])), {}))
>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [10, 11, 12, 13]])), {}))
>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [20, 21, 22, 23, 24]])), {}))
>>> dataset = interleave_datasets([d1, d2, d3])
>>> [x["a"] for x in dataset]
```
The result here will then be `[10, 0, 11, 1, 2]` instead of `[10, 0, 11, 1, 2, 20, 12, 13]`.
I modified the behavior because I found it to be consistent with the under/oversampling approach and because it unifies the undersampling and oversampling code, but I remain open to any suggestions.
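In pure Python, the round-robin `all_exhausted` behavior (the `probabilities is None` case) for the three example iterables can be sketched as follows — an illustrative re-implementation, not the actual code of the PR:

```python
def interleave_all_exhausted(datasets):
    """Round-robin over finite datasets, restarting each one when it runs out,
    and stopping once every dataset has been seen in full at least once."""
    n = len(datasets)
    positions = [0] * n
    fully_seen = [False] * n
    out = []
    i = 0
    while True:
        ds = datasets[i]
        out.append(ds[positions[i] % len(ds)])  # wrap around exhausted datasets
        positions[i] += 1
        if positions[i] >= len(ds):
            fully_seen[i] = True
        if all(fully_seen):
            break
        i = (i + 1) % n
    return out

result = interleave_all_exhausted([[0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24]])
assert len(result) == 5 * 3  # maxLengthDataset * nbDataset
assert result == [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]
```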
| true
|
1,388,914,476
|
https://api.github.com/repos/huggingface/datasets/issues/5035
|
https://github.com/huggingface/datasets/pull/5035
| 5,035
|
Fix typos in load docstrings and comments
|
closed
| 1
| 2022-09-28T08:05:07
| 2022-09-28T17:28:40
| 2022-09-28T17:26:15
|
albertvillanova
|
[] |
Minor fix of typos in load docstrings and comments
| true
|
1,388,855,136
|
https://api.github.com/repos/huggingface/datasets/issues/5034
|
https://github.com/huggingface/datasets/pull/5034
| 5,034
|
Update README.md of yahoo_answers_topics dataset
|
closed
| 4
| 2022-09-28T07:17:33
| 2022-10-06T15:56:05
| 2022-10-04T13:49:25
|
borgr
|
[] | null | true
|
1,388,842,236
|
https://api.github.com/repos/huggingface/datasets/issues/5033
|
https://github.com/huggingface/datasets/pull/5033
| 5,033
|
Remove redundant code from some dataset module factories
|
closed
| 1
| 2022-09-28T07:06:26
| 2022-09-28T16:57:51
| 2022-09-28T16:55:12
|
albertvillanova
|
[] |
This PR removes some redundant code introduced by mistake after a refactoring in:
- #4576
| true
|
1,388,270,935
|
https://api.github.com/repos/huggingface/datasets/issues/5032
|
https://github.com/huggingface/datasets/issues/5032
| 5,032
|
new dataset type: single-label and multi-label video classification
|
open
| 6
| 2022-09-27T19:40:11
| 2022-11-02T19:10:13
| null |
fcakyon
|
[
"enhancement"
] |
**Is your feature request related to a problem? Please describe.**
In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset.
**Describe the solution you'd like**
Assume I have video files with single/multiple labels. I want to train a single/multi-label video classification model. I want datasets to support generating multi-modal batches (audio + frame sequence) from video files. The audio waveform and frame sequence can be extracted from each video clip; then I can use any audio, image, or video model from the transformers library to extract features, which will be fed into my model.
**Describe alternatives you've considered**
Currently, I am using the https://github.com/facebookresearch/pytorchvideo dataloaders. There do not seem to be many alternatives.
**Additional context**
I am willing to open a PR but don't know where to start.
| false
|