url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | labels | state | locked | milestone | comments | created_at | updated_at | closed_at | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request | comments_text |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1640/comments | https://api.github.com/repos/huggingface/datasets/issues/1640/events | https://github.com/huggingface/datasets/pull/1640 | 774,921,836 | MDExOlB1bGxSZXF1ZXN0NTQ1NzI2NzY4 | 1,640 | Fix "'BertTokenizerFast' object has no attribute 'max_len'" | [] | closed | false | null | 0 | 2020-12-26T19:25:41Z | 2020-12-28T17:26:35Z | 2020-12-28T17:26:35Z | null | Tensorflow 2.3.0 gives:
FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.
Tensorflow 2.4.0 gives:
AttributeError 'BertTokenizerFast' object has no attribute 'max_len' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1640/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1640.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1640",
"merged_at": "2020-12-28T17:26:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1640.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1640"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1328/comments | https://api.github.com/repos/huggingface/datasets/issues/1328/events | https://github.com/huggingface/datasets/pull/1328 | 759,634,907 | MDExOlB1bGxSZXF1ZXN0NTM0NjA2MDM1 | 1,328 | Added the NewsPH Raw dataset and corresponding dataset card | [] | closed | false | null | 0 | 2020-12-08T17:25:45Z | 2020-12-10T11:04:34Z | 2020-12-10T11:04:34Z | null | This PR adds the original NewsPH dataset which is used to autogenerate the NewsPH-NLI dataset. Reopened a new PR as the previous one had problems.
Paper: https://arxiv.org/abs/2010.11574
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1328/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1328",
"merged_at": "2020-12-10T11:04:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1328"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4938/comments | https://api.github.com/repos/huggingface/datasets/issues/4938/events | https://github.com/huggingface/datasets/pull/4938 | 1,363,429,228 | PR_kwDODunzps4-coaB | 4,938 | Remove main branch rename notice | [] | closed | false | null | 1 | 2022-09-06T15:03:05Z | 2022-09-06T16:46:11Z | 2022-09-06T16:43:53Z | null | We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months)
I also unpinned the github issue about the branch renaming | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4938/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4938",
"merged_at": "2022-09-06T16:43:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4938"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4253/comments | https://api.github.com/repos/huggingface/datasets/issues/4253/events | https://github.com/huggingface/datasets/pull/4253 | 1,219,286,408 | PR_kwDODunzps42-c8Q | 4,253 | Create metric cards for mean IOU | [] | closed | false | null | 1 | 2022-04-28T20:58:27Z | 2022-04-29T17:44:47Z | 2022-04-29T17:38:06Z | null | Proposing a metric card for mIoU :rocket:
sorry for spamming you with review requests, @albertvillanova ! :hugs: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4253/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4253",
"merged_at": "2022-04-29T17:38:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4253"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2658/comments | https://api.github.com/repos/huggingface/datasets/issues/2658/events | https://github.com/huggingface/datasets/issues/2658 | 946,139,532 | MDU6SXNzdWU5NDYxMzk1MzI= | 2,658 | Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv | [] | closed | false | null | 0 | 2021-07-16T10:05:44Z | 2021-07-16T12:46:06Z | 2021-07-16T12:46:06Z | null | When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","` instead, which makes it impossible to make the csv loader infer the separator.
Related to https://github.com/huggingface/datasets/pull/2656
cc @SBrandeis | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2658/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2331/comments | https://api.github.com/repos/huggingface/datasets/issues/2331/events | https://github.com/huggingface/datasets/issues/2331 | 879,031,427 | MDU6SXNzdWU4NzkwMzE0Mjc= | 2,331 | Add Topical-Chat | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 0 | 2021-05-07T13:43:59Z | 2021-05-07T13:43:59Z | null | null | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **Data:** https://github.com/alexa/Topical-Chat
- **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2331/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/252/comments | https://api.github.com/repos/huggingface/datasets/issues/252/events | https://github.com/huggingface/datasets/issues/252 | 634,563,239 | MDU6SXNzdWU2MzQ1NjMyMzk= | 252 | NonMatchingSplitsSizesError error when reading the IMDB dataset | [] | closed | false | null | 4 | 2020-06-08T12:26:24Z | 2021-08-27T15:20:58Z | 2020-06-08T14:01:26Z | null | Hi!
I am trying to load the `imdb` dataset with this line:
`dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')`
but I am getting the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/load.py", line 517, in load_dataset
save_infos=save_infos,
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 363, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 421, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
Am I overlooking something? Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/252/timeline | null | completed | null | null | false | [
"I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?",
"I updated it, that was it, thanks!",
"Hello, I am facing the same problem... how do you clear the huggingface cache?",
"Hi ! The cache is at ~/.cache/huggingface\r\nYou can just delete this folder if needed :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2125/comments | https://api.github.com/repos/huggingface/datasets/issues/2125/events | https://github.com/huggingface/datasets/issues/2125 | 842,690,570 | MDU6SXNzdWU4NDI2OTA1NzA= | 2,125 | Is dataset timit_asr broken? | [] | closed | false | null | 2 | 2021-03-28T08:30:18Z | 2021-03-28T12:29:25Z | 2021-03-28T12:29:25Z | null | Using `timit_asr` dataset, I saw all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id",
"sentence_type", "speaker_id"]), num_examples=20)
```
`output`
<img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png">
I double-checked it [here](https://huggingface.co/datasets/viewer/), and encountered the same problem.
<img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2125/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ",
"@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem."
] |
https://api.github.com/repos/huggingface/datasets/issues/5402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5402/comments | https://api.github.com/repos/huggingface/datasets/issues/5402/events | https://github.com/huggingface/datasets/issues/5402 | 1,517,409,429 | I_kwDODunzps5acdSV | 5,402 | Missing state.json when creating a cloud dataset using a dataset_builder | [] | open | false | null | 3 | 2023-01-03T13:39:59Z | 2023-01-04T17:23:57Z | null | null | ### Describe the bug
Using `load_dataset_builder` to create a builder, run `download_and_prepare` to upload it to S3. However, when trying to load it, there are missing `state.json` files. Complete example:
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_dataset, load_dataset_builder
import s3fs
storage_options = {"session": Session()}
fs = s3fs.S3FileSystem(**storage_options)
output_dir = "s3://bucket/imdb"
builder = load_dataset_builder("imdb")
builder.download_and_prepare(output_dir, storage_options=storage_options)
load_from_disk(output_dir, fs=fs) # ERROR
# [Errno 2] No such file or directory: '/tmp/tmpy22yys8o/bucket/imdb/state.json'
```
As a comparison, if you use the non-lazy `load_dataset`, it works and the S3 folder has a different structure + state.json files. Example:
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_dataset, load_dataset_builder
import s3fs
storage_options = {"session": Session()}
fs = s3fs.S3FileSystem(**storage_options)
output_dir = "s3://bucket/imdb"
dataset = load_dataset("imdb",)
dataset.save_to_disk(output_dir, fs=fs)
load_from_disk(output_dir, fs=fs) # WORKS
```
You still want the 1st option for the laziness and the parquet conversion. Thanks!
### Steps to reproduce the bug
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_dataset, load_dataset_builder
import s3fs
storage_options = {"session": Session()}
fs = s3fs.S3FileSystem(**storage_options)
output_dir = "s3://bucket/imdb"
builder = load_dataset_builder("imdb")
builder.download_and_prepare(output_dir, storage_options=storage_options)
load_from_disk(output_dir, fs=fs) # ERROR
# [Errno 2] No such file or directory: '/tmp/tmpy22yys8o/bucket/imdb/state.json'
```
BTW, you need the AioSession as s3fs is now based on aiobotocore, see https://github.com/fsspec/s3fs/issues/385.
### Expected behavior
Expected to be able to load the dataset from S3.
### Environment info
```
s3fs 2022.11.0
s3transfer 0.6.0
datasets 2.8.0
aiobotocore 2.4.2
boto3 1.24.59
botocore 1.27.59
```
python 3.7.15. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5402/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5402/timeline | null | null | null | null | false | [
"`load_from_disk` must be used on datasets saved using `save_to_disk`: they correspond to fully serialized datasets including their state.\r\n\r\nOn the other hand, `download_and_prepare` just downloads the raw data and convert them to arrow (or parquet if you want). We are working on allowing you to reload a dataset saved on S3 with `download_and_prepare` using `load_dataset` in #5281 \r\n\r\nFor now I'd encourage you to keep using `save_to_disk`",
"Thanks, I'll follow that issue. \r\n\r\nI was following the [cloud storage](https://huggingface.co/docs/datasets/filesystems) docs section and perhaps I'm missing some part of the flow; start with `load_dataset_builder` + `download_and_prepare`. You say I need an explicit `save_to_disk` but what object needs to be saved? the builder? is that related to the other issue?",
"Right now `load_dataset_builder` + `download_and_prepare` is to be used with tools like dask or spark, but `load_dataset` will support private cloud storage soon as well so you'll be able to reload the dataset with `datasets`.\r\n\r\nRight now the only function that can load a dataset from a cloud storage is `load_from_disk`, that must be used with a dataset serialized with `save_to_disk`."
] |
https://api.github.com/repos/huggingface/datasets/issues/3041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3041/comments | https://api.github.com/repos/huggingface/datasets/issues/3041/events | https://github.com/huggingface/datasets/pull/3041 | 1,018,911,385 | PR_kwDODunzps4s1ZAc | 3,041 | Load private data files + use glob on ZIP archives for json/csv/etc. module inference | [] | closed | false | null | 4 | 2021-10-06T18:16:36Z | 2021-10-12T15:25:48Z | 2021-10-12T15:25:46Z | null | As mentioned in https://github.com/huggingface/datasets/issues/3032, loading data files from a private repository isn't working correctly because of the data files resolver.
#2986 did a refactor of the data files resolver. I added authentication to it.
I also improved it to glob inside ZIP archives to look for json/csv/etc. files and infer which dataset builder (json/csv/etc.) to use.
Fix https://github.com/huggingface/datasets/issues/3032
Note that #2986 needs to get merged first | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3041/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3041.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3041",
"merged_at": "2021-10-12T15:25:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3041.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3041"
} | true | [
"I have an error on windows:\r\n```python\r\naiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]\r\n```\r\nat the `fsspec` call in `xglob`:\r\n```python\r\nfs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)\r\n```\r\n\r\nLooks like the windows CI has an SSL issue... ",
"I can reproduce it on my windows machine. On linux it works fine though",
"I'm just skipping the windows test for now",
"The Windows CI failure seems unrelated to this PR\r\n```python\r\nERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/2627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2627/comments | https://api.github.com/repos/huggingface/datasets/issues/2627/events | https://github.com/huggingface/datasets/pull/2627 | 941,503,349 | MDExOlB1bGxSZXF1ZXN0Njg3MzczMDg1 | 2,627 | Minor fix tests with Windows paths | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-11T17:55:48Z | 2021-07-12T14:08:47Z | 2021-07-12T08:34:50Z | null | Minor fix tests with Windows paths. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2627/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2627.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2627",
"merged_at": "2021-07-12T08:34:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2627.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2627"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4137/comments | https://api.github.com/repos/huggingface/datasets/issues/4137/events | https://github.com/huggingface/datasets/pull/4137 | 1,199,000,453 | PR_kwDODunzps419D6A | 4,137 | Add single dataset citations for TweetEval | [] | closed | false | null | 2 | 2022-04-10T11:51:54Z | 2022-04-12T07:57:22Z | 2022-04-12T07:51:15Z | null | This PR adds single dataset citations at the request of the original creators of the TweetEval dataset.
This is a recent email from the creator:
> Could I ask you a favor? Would you be able to add at the end of the README the citations of the single datasets as well? You can just copy our readme maybe? https://github.com/cardiffnlp/tweeteval#citing-tweeteval
(just to be sure that the creator of the single datasets also get credits when tweeteval is used)
Please let me know if this looks okay or if any changes are needed.
Thanks,
Gunjan
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4137/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4137",
"merged_at": "2022-04-12T07:51:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4137"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The `test_dataset_cards` method is failing with the error:\r\n\r\n```\r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE The following typing errors are found: {'annotations_creators': \"(Expected `typing.List` with length > 0. Found value of type: `<class 'list'>`, with length: 0.\\n)\\nOR\\n(Expected `typing.Dict` with length > 0. Found value of type: `<class 'list'>`, with length: 0.\\n)\"}\r\n```\r\n\r\nAdding `found` as annotation creators."
] |
https://api.github.com/repos/huggingface/datasets/issues/3638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3638/comments | https://api.github.com/repos/huggingface/datasets/issues/3638/events | https://github.com/huggingface/datasets/issues/3638 | 1,115,725,703 | I_kwDODunzps5CgJ-H | 3,638 | AutoTokenizer hash value got change after datasets.map | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 11 | 2022-01-27T03:19:03Z | 2022-08-26T07:47:56Z | null | null | ## Describe the bug
AutoTokenizer hash value got changed after datasets.map
## Steps to reproduce the bug
1. trash huggingface datasets cache
2. run the following code:
```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
```
got
```
Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1112.35it/s]
f4976bb4694ebc51
3fca35a1fd4a1251
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.96ba/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15.25ba/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.81ba/s]
d32837619b7d7d01
5fd925c82edd62b6
```
3. run raw_datasets.map(tokenize_function, batched=True) again and see that some datasets are not using the cache.
## Expected results
`AutoTokenizer` should work like a specific tokenizer (the hash value doesn't change after map):
```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
```
```
Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1091.22it/s]
46d4b31f54153fc7
5b8771afd8d43888
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6b07ff82ae9d5c51.arrow
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-af738a6d84f3864b.arrow
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-531d2a603ba713c1.arrow
46d4b31f54153fc7
5b8771afd8d43888
```
## Environment info
- `datasets` version: 1.18.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 6.0.1
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3638/timeline | null | null | null | null | false | [
"This issue was original reported at https://github.com/huggingface/transformers/issues/14931 and It seems like this issue also occur with other AutoClass like AutoFeatureExtractor.",
"Thanks for moving the issue here !\r\n\r\nI wasn't able to reproduce the issue on my env (the hashes stay the same):\r\n```\r\n- `transformers` version: 1.15.0\r\n- `tokenizers` version: 0.10.3\r\n- `datasets` version: 1.18.1\r\n- `dill` version: 0.3.4\r\n- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11\r\n- Python version: 3.7.10\r\n- PyArrow version: 6.0.1\r\n```\r\nHowever I was able to reproduce it on Google Colab (the hashes end up different):\r\n```\r\n- `transformers` version: 1.15.0\r\n- `tokenizers` version: 0.10.3\r\n- `datasets` version: 1.18.1\r\n- `dill` version: 0.3.4\r\n- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.12\r\n- PyArrow version: 3.0.0\r\n```\r\nI'll investigate why it doesn't work properly on Google Colab :)",
"I found the issue: the tokenizer has something inside it that changes.\r\n\r\nBefore the call, `tokenizer._tokenizer.truncation` is None, and after the call it changes to this for some reason:\r\n```\r\n{'max_length': 512, 'strategy': 'longest_first', 'stride': 0}\r\n```\r\n\r\nDoes anybody know why calling the tokenizer would change its state this way ? cc @Narsil @SaulLu maybe ?",
"`tokenizer.encode(..)` does not accept argument like max_length, strategy or stride.\r\n\r\nIn `tokenizers` you have to modify the tokenizer state by setting various `TruncationParams` (and/or `PaddingParams`).\r\nHowever, since this is modifying the state, you need to mutably borrow the tokenizer (a rust concept). The key principle is that there can ever be only 1 mutable borrow at a time during the span of the tokenizer lifecycle.\r\n\r\nBecause of this, if `transformers` blindly set `TruncationParams` and `PaddingParams` on every call, it would cause the tokenizer to crash (or make the various threads accessing it hang, which is not necessarily better).\r\n\r\nIn order to avoid that, we decided to handle it this way : https://github.com/huggingface/transformers/pull/12550 . \r\n\r\nWhich should explain the state of the tokenizer being modified (hence its hash).\r\n\r\nNow for a temporary solution, simply encoding once with the tokenizer should give it it's proper hash (since by default the tokenizer doesn't have this state, looks at the first encoding call, and creates it).\r\n\r\nWe could try and set these 2 dicts at initialization time, but it wouldn't work if a user modified the tokenizer state later\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(..)\r\ntokenizer.truncation_side = \"left\"\r\n# Now we have a difference between `tokenizer._tokenizer.truncation` and `tokenizer.truncation_side`\r\n```\r\nIf we wanted to fix it correctly it would mean mapping every assignation to it's proper location on `tokenizer.{padding/truncation}`\r\n\r\nI think it's important to note that we cannot guarantee a tokenizer' hash remains the same if *any* of those parameters are modified through the `.map` function.\r\n\r\nEdit: Another option would be to override the default __hash__ function, but I don't know if there's a sound implementation that could fit.",
"Thanks a lot for the explanation !\r\nI think if we set these 2 dicts at initialization time it would be amazing already\r\n\r\nShall we open an issue in `transformers` to ask for these dictionaries to be set when the tokenizer is instantiated ?\r\n\r\n> Edit: Another option would be to override the default hash function, but I don't know if there's a sound implementation that could fit.\r\n\r\nIn `datasets` we can easily have custom hashing for objects of the other HF libraries if we want. For example we ignore the cache some tokenizers have. However in this specific case it touches parameters that may change the behavior of the tokenizer itself. I'm not sure the logic that determines how a tokenizer behaves should be in `datasets`",
"A hack we could have in the `datasets` lib would be to call the tokenizer before hashing it in order to set all its parameters correctly - but it sounds a lot like a hack and I'm not sure this can work in the long run",
"Fully agree with everything you said. \r\n\r\nI think the best course of action is creating an issue in `transformers`. I can start the work on this.\r\nI think the code changes are fairly simple. Making a sound test + not breaking other stuff might be different :D",
"It should be noted that this problem also occurs in other AutoClasses, such as AutoFeatureExtractor, so I don't think handling it in Datasets is a long-term practice either.",
"> I think the best course of action is creating an issue in `transformers`. I can start the work on this.\r\n\r\n@Narsil Hi, I reopen this issue in `transformers` https://github.com/huggingface/transformers/issues/14931",
"Here is @Narsil comment from https://github.com/huggingface/transformers/issues/14931#issuecomment-1074981569\r\n> # TL;DR\r\n> Call the function once on a dummy example beforehand will fix it.\r\n> \r\n> ```python\r\n> tokenizer(\"Some\", \"test\", truncation=True)\r\n> ```\r\n> \r\n> # Long answer\r\n> If I remember the last status, it's hard doing anything, since the call itself\r\n> \r\n> ```python\r\n> tokenizer(example[\"sentence1\"], example[\"sentence2\"], truncation=True)\r\n> ```\r\n> \r\n> will modify the tokenizer. It's the `truncation=True` that modifies the tokenizer to put it into truncation mode if you will. Calling the tokenizer once with that argument would fix the cache.\r\n> \r\n> Finding a fix that :\r\n> \r\n> * Doesn't imply a huge chunk of work on `tokenizers` (with potential loss of performance, and breaking backward compatibility)\r\n> * Doesn't imply `datasets` running a first pass of the loop\r\n> * Doesn't imply `datasets` looking at the map function itself\r\n> * Uses a sound `hash` for this object in `datasets`.\r\n> \r\n> is IIRC impossible for this use case.\r\n> \r\n> I can explain a bit more why the first option is not desirable.\r\n> \r\n> In order to \"fix\" this for tokenizers, we would need to make `tokenizer(..)` purely without side effects. This means that the \"options\" of tokenization (like `truncation` and `padding` at least) would have\r\n",
"For me this workaround only works if I don't pass the `num_proc=X` argument to `datasets.map`"
] |
https://api.github.com/repos/huggingface/datasets/issues/221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/221/comments | https://api.github.com/repos/huggingface/datasets/issues/221/events | https://github.com/huggingface/datasets/pull/221 | 627,300,648 | MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0 | 221 | Fix tests/test_dataset_common.py | [] | closed | false | null | 1 | 2020-05-29T14:12:15Z | 2020-06-01T12:20:42Z | 2020-05-29T15:02:23Z | null | When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220. I get the error ` unexpected keyword argument "'download_and_prepare_kwargs'"` at the level of `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/master/src/nlp/load.py#L441) no longer has the argument `download_and_prepare_kwargs` but rather `download_config`. So here I change the tests accordingly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/221/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/221",
"merged_at": "2020-05-29T15:02:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/221"
} | true | [
"Thanks ! Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md) ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/3567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3567/comments | https://api.github.com/repos/huggingface/datasets/issues/3567/events | https://github.com/huggingface/datasets/pull/3567 | 1,100,296,696 | PR_kwDODunzps4w2xDl | 3,567 | Fix push to hub to allow individual split push | [] | closed | false | null | 1 | 2022-01-12T12:42:58Z | 2022-07-27T12:11:12Z | 2022-07-27T12:11:11Z | null | # Description of the issue
If one decides to push a single split to a datasets repo, they upload the dataset and override the config. However, the previous config's splits end up being lost, even though the necessary dataset files are still there.
The new flow is the following:
- query the old config from the repo
- update into a new config (add/overwrite new split for example)
- push the new config
# Side fix
- `repo_id` in HfFileSystem was wrongly typed.
- I've added `indent=2` as it becomes much easier to read now.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3567/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3567.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3567",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3567.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3567"
} | true | [
"This has been addressed in https://github.com/huggingface/datasets/pull/4415. Closing."
] |
https://api.github.com/repos/huggingface/datasets/issues/3023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3023/comments | https://api.github.com/repos/huggingface/datasets/issues/3023/events | https://github.com/huggingface/datasets/pull/3023 | 1,015,923,031 | PR_kwDODunzps4srQ4i | 3,023 | Fix typo | [] | closed | false | null | 0 | 2021-10-05T06:06:11Z | 2021-10-05T11:56:55Z | 2021-10-05T11:56:55Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3023/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3023/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3023.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3023",
"merged_at": "2021-10-05T11:56:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3023.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3023"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/459/comments | https://api.github.com/repos/huggingface/datasets/issues/459/events | https://github.com/huggingface/datasets/pull/459 | 669,545,437 | MDExOlB1bGxSZXF1ZXN0NDU5OTAxMjEy | 459 | [Breaking] Update Dataset and DatasetDict API | [] | closed | false | null | 0 | 2020-07-31T08:11:33Z | 2020-08-26T08:28:36Z | 2020-08-26T08:28:35Z | null | This PR contains a few breaking changes so it's probably good to keep it for the next (major) release:
- rename the `flatten`, `drop` and `dictionary_encode_column` methods to `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects as discussed in #166. From now on we should keep the convention of having a trailing underscore for methods which have an in-place effect. I also adopt the convention of not returning the (self) dataset for these methods. This is different from what PyTorch does for instance (`model.to()` is in-place but returns the self model) but I feel like it's a safer approach in terms of UX.
- remove the `dataset.columns` property which returns a low-level Apache Arrow object and should not be used by users. Similarly, remove `dataset.nbytes` which we don't really want to expose in this bare-bone format.
- add a few more properties and methods to `DatasetDict` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/459/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/459",
"merged_at": "2020-08-26T08:28:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/459"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3150/comments | https://api.github.com/repos/huggingface/datasets/issues/3150/events | https://github.com/huggingface/datasets/issues/3150 | 1,033,831,530 | I_kwDODunzps49nwRq | 3,150 | Faiss _is_ available on Windows | [] | closed | false | null | 1 | 2021-10-22T18:07:16Z | 2021-11-02T10:06:03Z | 2021-11-02T10:06:03Z | null | In the setup file, I find the following:
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171
However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPi page](https://pypi.org/project/faiss-cpu/#files), where Windows wheels are available. Maybe this was true for older versions? For current versions, this can be removed I think.
(This isn't really a bug, but I didn't know how else to tag it.)
If you agree I can do a quick PR and remove that line. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3150/timeline | null | completed | null | null | false | [
"Sure, feel free to open a PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/264/comments | https://api.github.com/repos/huggingface/datasets/issues/264/events | https://github.com/huggingface/datasets/pull/264 | 637,106,170 | MDExOlB1bGxSZXF1ZXN0NDMzMTU0ODQ4 | 264 | Fix small issues creating dataset | [] | closed | false | null | 0 | 2020-06-11T15:20:16Z | 2020-06-12T08:15:57Z | 2020-06-12T08:15:56Z | null | Fix many small issues mentioned in #249:
- don't force users to install apache beam for commands
- fix None cache dir when using `dl_manager.download_custom`
- added new extras in `setup.py` named `dev` that contains tests and quality dependencies
- mock dataset sizes when running tests with dummy data
- add a note about the naming convention of datasets (camel case - snake case) in CONTRIBUTING.md
This should help users create their datasets.
Next step is the `add_dataset.md` docs :) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/264/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/264/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/264.diff",
"html_url": "https://github.com/huggingface/datasets/pull/264",
"merged_at": "2020-06-12T08:15:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/264.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/264"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3586/comments | https://api.github.com/repos/huggingface/datasets/issues/3586/events | https://github.com/huggingface/datasets/issues/3586 | 1,106,455,672 | I_kwDODunzps5B8yx4 | 3,586 | Revisit `enable/disable_` toggle function prefix | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2022-01-18T04:09:55Z | 2022-03-14T15:01:08Z | 2022-03-14T15:01:08Z | null | As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to
- De-deprecating `disable_progress_bar()`
- Adding `enable_progress_bar()`
- On the caching side, adding `enable_caching` and `disable_caching`
Additional decisions have to be made with regards to the existing `set_enabled_X` functions; that is, whether to keep them as is or deprecate them in favor of the aforementioned functions.
cc @mariosasko @lhoestq | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3586/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1227/comments | https://api.github.com/repos/huggingface/datasets/issues/1227/events | https://github.com/huggingface/datasets/pull/1227 | 758,049,060 | MDExOlB1bGxSZXF1ZXN0NTMzMjg1ODIx | 1,227 | readme: remove link to Google's responsible AI practices | [] | closed | false | null | 0 | 2020-12-06T23:17:22Z | 2020-12-07T08:35:19Z | 2020-12-06T23:20:41Z | null | ...maybe we'll find a company that reallly stands behind responsible AI practices ;) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1227/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1227.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1227",
"merged_at": "2020-12-06T23:20:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1227.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1227"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1580/comments | https://api.github.com/repos/huggingface/datasets/issues/1580/events | https://github.com/huggingface/datasets/pull/1580 | 768,111,377 | MDExOlB1bGxSZXF1ZXN0NTQwNjQxNDQ3 | 1,580 | made suggested changes in diplomacy_detection.py | [] | closed | false | null | 0 | 2020-12-15T19:52:00Z | 2020-12-16T10:27:52Z | 2020-12-16T10:27:52Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1580/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1580.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1580",
"merged_at": "2020-12-16T10:27:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1580.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1580"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/792/comments | https://api.github.com/repos/huggingface/datasets/issues/792/events | https://github.com/huggingface/datasets/issues/792 | 734,693,652 | MDU6SXNzdWU3MzQ2OTM2NTI= | 792 | KILT dataset: empty string in triviaqa input field | [] | closed | false | null | 1 | 2020-11-02T17:33:54Z | 2020-11-05T10:34:59Z | 2020-11-05T10:34:59Z | null | # What happened
Both the train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have an empty string in their input field (unlike the natural questions dataset, part of the same benchmark)
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine, removed output for a better readibility
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five £', '5 £', '£5', 'five £'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
Stay safe :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/792/timeline | null | completed | null | null | false | [
"Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))"
] |
https://api.github.com/repos/huggingface/datasets/issues/2086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2086/comments | https://api.github.com/repos/huggingface/datasets/issues/2086/events | https://github.com/huggingface/datasets/pull/2086 | 836,249,587 | MDExOlB1bGxSZXF1ZXN0NTk2Nzg0Mjcz | 2,086 | change user permissions to -rw-r--r-- | [] | closed | false | null | 1 | 2021-03-19T18:14:56Z | 2021-03-24T13:59:04Z | 2021-03-24T13:59:04Z | null | Fix for #2065 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2086/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2086.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2086",
"merged_at": "2021-03-24T13:59:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2086.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2086"
} | true | [
"I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. "
] |
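A quick way to verify the permissions described in the comment above; the path below is a placeholder, not an actual cache location.
```python
# Check that a cached arrow file now has -rw-r--r-- (0644) permissions.
import os

arrow_file = "path/to/ade_corpus_v2-train.arrow"  # placeholder: point this at a real cache file
print(oct(os.stat(arrow_file).st_mode & 0o777))   # expected: '0o644'
```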
https://api.github.com/repos/huggingface/datasets/issues/1024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1024/comments | https://api.github.com/repos/huggingface/datasets/issues/1024/events | https://github.com/huggingface/datasets/pull/1024 | 755,664,113 | MDExOlB1bGxSZXF1ZXN0NTMxMzMzOTc5 | 1,024 | Add ZEST: ZEroShot learning from Task descriptions | [] | closed | false | null | 1 | 2020-12-02T22:41:20Z | 2020-12-03T19:21:00Z | 2020-12-03T16:09:15Z | null | Adds the ZEST dataset on zero-shot learning from task descriptions from AI2.
- Webpage: https://allenai.org/data/zest
- Paper: https://arxiv.org/abs/2011.08115
The nature of this dataset made the supported task tags tricky, so I'd appreciate any feedback @yjernite. Also let me know if you think we should have an `other-task-generalization` or something like that... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1024/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1024/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1024.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1024",
"merged_at": "2020-12-03T16:09:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1024.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1024"
} | true | [
"Looks good to me, we can ping the authors for more info later. And yes apply `other-task` labels liberally, we can sort them out later :) \r\n\r\nLooks ready to merge when you're ready @joeddav "
] |
https://api.github.com/repos/huggingface/datasets/issues/1129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1129/comments | https://api.github.com/repos/huggingface/datasets/issues/1129/events | https://github.com/huggingface/datasets/pull/1129 | 757,255,492 | MDExOlB1bGxSZXF1ZXN0NTMyNjYxNzM2 | 1,129 | Adding initial version of cord-19 dataset | [] | closed | false | null | 5 | 2020-12-04T17:03:17Z | 2021-02-09T10:22:35Z | 2021-02-09T10:18:06Z | null | Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
### TODO:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1129/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1129.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1129",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1129.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1129"
} | true | [
"Hi @ggdupont !\r\nHave you had a chance to take a look at my suggestions ?\r\nFeel free to ping me if you have questions or when you're ready for a review",
"> Hi @ggdupont !\r\n> Have you had a chance to take a look at my suggestions ?\r\n> Feel free to ping me if you have questions or when you're ready for a review\r\n\r\nYes I did, just busy period (and no time on weekend right now ;-) )",
"With some delay, reduced the dummy data and had t rebase",
"Thanks !\r\n\r\nIt looks like the rebase messed up the github diff for this PR (2.000+ files changed)\r\nCould you create another branch and another PR please ?",
"Cleaned PR: https://github.com/huggingface/datasets/pull/1850"
] |
https://api.github.com/repos/huggingface/datasets/issues/2099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2099/comments | https://api.github.com/repos/huggingface/datasets/issues/2099/events | https://github.com/huggingface/datasets/issues/2099 | 838,523,819 | MDU6SXNzdWU4Mzg1MjM4MTk= | 2,099 | load_from_disk takes a long time to load local dataset | [] | closed | false | null | 8 | 2021-03-23T09:28:37Z | 2021-03-23T17:12:16Z | 2021-03-23T17:12:16Z | null | I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though).
Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers?
Tagging @lhoestq since you seem to be working on these issues and PRs :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2099/timeline | null | completed | null | null | false | [
"Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?",
"It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a list of integers. The `text` column is removed during tokenization.\r\n\r\n```\r\ndef add_len_and_seq(example):\r\n end_idx = example['input_ids'].index(SEP)\r\n example['actual_len'] = end_idx-1\r\n seq_len = len(example['input_ids'])\r\n \r\n\r\n example['seq'] = [PAD_ID] + [np.uint8(example['some_integer'])]*(end_idx-1) + [PAD_ID]*(seq_len-end_idx)\r\n \r\n return example\r\n```\r\n",
"Is `PAD_ID` a python integer ? You need all the integers in `example['seq']` to have the same type.\r\nDoes this work if you remove the `np.uint8` and use python integers instead ?",
"yup I casted it to `np.uint8` outside the function where it was defined. It was originally using python integers.",
"Strangely, even when I manually created `np.arrays` of specific `dtypes`, the types in the final `dataset_info.json` that gets written are still `int64`.\r\n\r\nUpdate: I tried creating lists of `int8`s and got the same result.",
"Yes this is a known issue: #625 \r\nWe're working on making the precision kept for numpy :)\r\nTo specify the precision of the integers, currently one needs to specify the output features with `.map(..., features=output_features)`",
"Do you know what step is taking forever in the code ?\r\nWhat happens if you interrupt the execution of the dataset loading ?",
"After a synchronous discussion, we found that the cache file sizes have an enormous effect on the loading speed: smaller cache files result in faster load times. `num_proc` controls the number of cache files that are being written and is inversely proportional to the individual file size. In other words, increase `num_proc` for smaller cache files :)\r\n\r\nMaybe this can be highlighted somewhere in the docs."
] |
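A minimal sketch combining the two suggestions from the thread above (explicit output features to keep 8-bit precision, plus a higher `num_proc` for smaller cache files); the column names are illustrative, not the reporter's actual schema.
```python
from datasets import Dataset, Features, Sequence, Value

ds = Dataset.from_dict({"id": list(range(8)), "some_integer": [3] * 8})

output_features = Features(
    {
        "id": Value("int64"),
        "some_integer": Value("int64"),
        "seq": Sequence(Value("uint8")),  # keep the extra column at 8-bit precision
    }
)

def add_seq(example):
    example["seq"] = [example["some_integer"]] * 4
    return example

# More processes -> more (and smaller) cache files, which load faster afterwards.
ds = ds.map(add_seq, features=output_features, num_proc=2)
print(ds.features)
```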
https://api.github.com/repos/huggingface/datasets/issues/4766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4766/comments | https://api.github.com/repos/huggingface/datasets/issues/4766/events | https://github.com/huggingface/datasets/issues/4766 | 1,321,809,380 | I_kwDODunzps5OyTXk | 4,766 | Dataset Viewer issue for openclimatefix/goes-mrms | [] | open | false | null | 1 | 2022-07-29T06:17:14Z | 2022-07-29T08:43:58Z | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4766/timeline | null | null | null | null | false | [
"Thanks for reporting, @cheaterHy.\r\n\r\nThe cause of this issue is a misalignment between the names of the repo (`goes-mrms`, with hyphen) and its Python loading scrip file (`goes_mrms.py`, with underscore).\r\n\r\nI've opened an Issue discussion in their repo: https://huggingface.co/datasets/openclimatefix/goes-mrms/discussions/1"
] |
https://api.github.com/repos/huggingface/datasets/issues/783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/783/comments | https://api.github.com/repos/huggingface/datasets/issues/783/events | https://github.com/huggingface/datasets/pull/783 | 733,536,254 | MDExOlB1bGxSZXF1ZXN0NTEzMzAwODUz | 783 | updated links to v1.3 of quail, fixed the description | [] | closed | false | null | 1 | 2020-10-30T21:47:33Z | 2020-11-29T23:05:19Z | 2020-11-29T23:05:18Z | null | updated links to v1.3 of quail, fixed the description | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/783/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/783",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/783"
} | true | [
"we're using quail 1.3 now thanks.\r\nclosing this one"
] |
https://api.github.com/repos/huggingface/datasets/issues/1226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1226/comments | https://api.github.com/repos/huggingface/datasets/issues/1226/events | https://github.com/huggingface/datasets/pull/1226 | 758,036,979 | MDExOlB1bGxSZXF1ZXN0NTMzMjc2OTU3 | 1,226 | Add menyo_20k_mt dataset | [] | closed | false | null | 2 | 2020-12-06T22:16:15Z | 2020-12-10T19:22:14Z | 2020-12-10T19:22:14Z | null | Add menyo_20k_mt dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1226/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1226.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1226",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1226.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1226"
} | true | [
"looks like your PR includes changes about many other files than the ones for menyo 20k mt\r\nCan you create another branch and another PR please ?",
"Yes, I will"
] |
https://api.github.com/repos/huggingface/datasets/issues/4634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4634/comments | https://api.github.com/repos/huggingface/datasets/issues/4634/events | https://github.com/huggingface/datasets/issues/4634 | 1,294,405,251 | I_kwDODunzps5NJw6D | 4,634 | Can't load the Hausa audio dataset | [] | closed | false | null | 1 | 2022-07-05T14:47:36Z | 2022-09-13T14:07:32Z | 2022-09-13T14:07:32Z | null | common_voice_train = load_dataset("common_voice", "ha", split="train+validation") | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4634/timeline | null | completed | null | null | false | [
"Could you provide the error details. It is difficult to debug otherwise. Also try other config. `ha` is not a valid."
] |
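A minimal sketch for finding a valid language configuration before loading, as the comment above suggests; the code to pass as the second argument is whichever entry appears in the printed list.
```python
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("common_voice")
print(configs[:10])  # inspect the available language codes

# then, assuming the desired language code shows up in `configs`:
# common_voice_train = load_dataset("common_voice", "<language_code>", split="train+validation")
```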
https://api.github.com/repos/huggingface/datasets/issues/119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/119/comments | https://api.github.com/repos/huggingface/datasets/issues/119/events | https://github.com/huggingface/datasets/issues/119 | 618,652,145 | MDU6SXNzdWU2MTg2NTIxNDU= | 119 | 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' | [] | closed | false | null | 2 | 2020-05-15T02:27:26Z | 2020-05-15T05:11:22Z | 2020-05-15T02:45:28Z | null | I'm trying to load CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I meet this error :
> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/119/timeline | null | completed | null | null | false | [
"It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache Arrow\r\nHome-page: https://arrow.apache.org/\r\nAuthor: None\r\nAuthor-email: None\r\nLicense: Apache License, Version 2.0\r\nLocation: /usr/local/lib/python3.6/dist-packages\r\nRequires: numpy\r\nRequired-by: nlp, feather-format\r\n> \r\n> version = 0.14.1",
"Ok I just had to restart the runtime after installing `nlp`. After restarting, the version of `pyarrow` is fine."
] |
https://api.github.com/repos/huggingface/datasets/issues/2703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2703/comments | https://api.github.com/repos/huggingface/datasets/issues/2703/events | https://github.com/huggingface/datasets/issues/2703 | 950,482,284 | MDU6SXNzdWU5NTA0ODIyODQ= | 2,703 | Bad message when config name is missing | [] | closed | false | null | 0 | 2021-07-22T09:47:23Z | 2021-07-22T10:02:40Z | 2021-07-22T10:02:40Z | null | When loading a dataset that have several configurations, we expect to see an error message if the user doesn't specify a config name.
However in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message:
```python
import datasets
datasets.load_dataset("glue")
```
raises
```python
AttributeError: 'BuilderConfig' object has no attribute 'text_features'
```
instead of
```python
ValueError: Config name is missing.
Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']
Example of usage:
`load_dataset('glue', 'cola')`
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2703/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5786/comments | https://api.github.com/repos/huggingface/datasets/issues/5786/events | https://github.com/huggingface/datasets/issues/5786 | 1,680,957,070 | I_kwDODunzps5kMV6O | 5,786 | Multiprocessing in a `filter` or `map` function with a Pytorch model | [] | closed | false | null | 5 | 2023-04-24T10:38:07Z | 2023-05-30T09:56:30Z | 2023-04-24T10:43:58Z | null | ### Describe the bug
I am trying to use a Pytorch model loaded on CPUs with multiple processes with a `.map` or a `.filter` method.
Usually, when dealing with models that are non-picklable, creating a class so that the `map` function is its `__call__` method, and adding a `__reduce__` method, helps to solve the problem.
However, here, the command hangs without throwing an error.
### Steps to reproduce the bug
```
from datasets import Dataset
import torch
from torch import nn
from torchvision import models
class FilterFunction:
#__slots__ = ("path_model", "model") # Doesn't change anything uncommented
def __init__(self, path_model):
self.path_model = path_model
model = models.resnet50()
model.fc = nn.Sequential(
nn.Linear(2048, 512),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(512, 10),
nn.LogSoftmax(dim=1)
)
model.load_state_dict(torch.load(path_model, map_location=torch.device("cpu")))
model.eval()
self.model = model
def __call__(self, batch):
return [True] * len(batch["id"])
# Comment this to have an error
def __reduce__(self):
return (self.__class__, (self.path_model,))
dataset = Dataset.from_dict({"id": [0, 1, 2, 4]})
# Download (100 MB) at https://github.com/emiliantolo/pytorch_nsfw_model/raw/master/ResNet50_nsfw_model.pth
path_model = "/fsx/hugo/nsfw_image/ResNet50_nsfw_model.pth"
filter_function = FilterFunction(path_model=path_model)
# Works
filtered_dataset = dataset.filter(filter_function, num_proc=1, batched=True, batch_size=2)
# Doesn't work
filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)
```
### Expected behavior
The command `filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)` should work and not hang.
### Environment info
Datasets: 2.11.0
Pyarrow: 11.0.0
Ubuntu | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5786/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5786/timeline | null | completed | null | null | false | [
"Hi ! PyTorch may hang when calling `load_state_dict()` in a subprocess. To fix that, set the multiprocessing start method to \"spawn\". Since `datasets` uses `multiprocess`, you should do:\r\n\r\n```python\r\n# Required to avoid issues with pytorch (otherwise hangs during load_state_dict in multiprocessing)\r\nimport multiprocess.context as ctx\r\nctx._force_start_method('spawn')\r\n```\r\n\r\nAlso make sure to run your main code in `if __name__ == \"__main__\":` to avoid issues with python multiprocesing",
"Thanks!",
"@lhoestq Hello, I also encountered this problem but maybe with another reason. Here is my code:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir, model_max_length=training_args.model_max_length)\r\ndata = load_dataset(\"json\", data_files=data_args.train_file, cache_dir=data_args.data_cache_dir)\r\ndef func(samples):\r\n # main operation\r\n for sentence_value in samples:\r\n sentence_ids = tokenizer.encode(sentence_value, add_special_tokens=False, max_length=tokenizer.model_max_length, truncation=True)\r\n ... ...\r\ntrain_data = data[\"train\"].shuffle().map(func, num_proc=os.cpu_count())\r\n```\r\nIt hangs after the progress reaches 100%. Could you help me point out the reason?",
"@SkyAndCloud your issue doesn't seem related to the original post - could you open a new issue and provide more details ? (size of the dataset, number of cpus, how much time it took to run, `datasets` version)",
"@lhoestq Hi, I just solved this problem. Because the input is extremely long and the tokenizer requests a large amount of memory, which leads to a OOM error and may eventually causes the hang. I didn't filter those too-long sentences because I thought `tokenizer` would stop once the length exceeds the `max_length`. However, it actually firstly complete the tokenization of entire sentence and then truncate it."
] |
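Putting the suggested fix from the thread above together with a simplified filter call; `keep_all` is a stand-in for the model-based `FilterFunction` from the report.
```python
# Force the "spawn" start method before using num_proc > 1 (per the comment above),
# and guard the main code as recommended.
import multiprocess.context as ctx
from datasets import Dataset

ctx._force_start_method("spawn")

def keep_all(batch):
    # stand-in for the model-based filter from the report
    return [True] * len(batch["id"])

if __name__ == "__main__":
    dataset = Dataset.from_dict({"id": [0, 1, 2, 4]})
    filtered = dataset.filter(keep_all, num_proc=2, batched=True, batch_size=2)
    print(len(filtered))
```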
https://api.github.com/repos/huggingface/datasets/issues/4161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4161/comments | https://api.github.com/repos/huggingface/datasets/issues/4161/events | https://github.com/huggingface/datasets/pull/4161 | 1,203,230,485 | PR_kwDODunzps42LEhi | 4,161 | Add Visual Genome | [] | closed | false | null | 4 | 2022-04-13T12:25:24Z | 2022-04-21T15:42:49Z | 2022-04-21T13:08:52Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4161/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4161.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4161",
"merged_at": "2022-04-21T13:08:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4161.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4161"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hum there seems to be some issues with tasks in test:\r\n - some tasks don't fit anything in `tasks.json`. Do I remove them in `task_categories`?\r\n - some tasks should exist, typically `visual-question-answering` (https://github.com/huggingface/datasets/blame/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/utils/resources/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my `master` is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n \r\n cc @mariosasko @lhoestq ",
"> some tasks don't fit anything in tasks.json. Do I remove them in task_categories?\r\n\r\nYou can keep them, but add `other-` as a prefix to those tasks to make the CI ignore it\r\n\r\n> some tasks should exist, typically visual-question-answering (https://github.com/huggingface/datasets/blame/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/utils/resources/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my master is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n\r\nFeel free to merge upstream/master into your branch ;)\r\n\r\nEDIT: actually I just noticed you've already done this, thanks !",
"After offline discussions: will keep that image essentially it's necessary as I have a mapping that creates a mapping between url and local path (images are downloaded via a zip file) and dummy data needs to store that dummy image. The issue is when I read an annotation, I get a url, compute the local path, and basically I assume the local path exists since I've extracted all the images ... This isn't true if dummy data doesn't have all the images, so instead I've added a script that \"fixes\" the dummy data after using the CLI, it essentially adds the dummy image in the zip corresponding to the url."
] |
https://api.github.com/repos/huggingface/datasets/issues/4598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4598/comments | https://api.github.com/repos/huggingface/datasets/issues/4598/events | https://github.com/huggingface/datasets/pull/4598 | 1,288,774,514 | PR_kwDODunzps46kfOS | 4,598 | Host financial_phrasebank data on the Hub | [] | closed | false | null | 1 | 2022-06-29T13:59:31Z | 2022-07-01T09:41:14Z | 2022-07-01T09:29:36Z | null |
Fix #4597. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4598/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4598.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4598",
"merged_at": "2022-07-01T09:29:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4598.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4598"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3964/comments | https://api.github.com/repos/huggingface/datasets/issues/3964/events | https://github.com/huggingface/datasets/issues/3964 | 1,173,564,993 | I_kwDODunzps5F8y5B | 3,964 | Add default Audio Loader | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2022-03-18T12:58:55Z | 2022-08-22T14:20:46Z | 2022-08-22T14:20:46Z | null | **Is your feature request related to a problem? Please describe.**
Writing a custom dataset loading script might be a bit challenging for users.
**Describe the solution you'd like**
Add default Audio loader (analogous to ImageFolder) for small datasets with standard directory structure.
**Describe alternatives you've considered**
Create a custom loading script? That's what users are doing now.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3964/timeline | null | completed | null | null | false | [] |
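For reference, a minimal sketch of the kind of loader this request describes, assuming a `datasets` version that ships the `audiofolder` builder; the directory layout is illustrative.
```python
from datasets import load_dataset

# expects a layout such as my_audio/train/<label>/<file>.wav
ds = load_dataset("audiofolder", data_dir="my_audio")
print(ds)
```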
https://api.github.com/repos/huggingface/datasets/issues/1686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1686/comments | https://api.github.com/repos/huggingface/datasets/issues/1686/events | https://github.com/huggingface/datasets/issues/1686 | 778,921,684 | MDU6SXNzdWU3Nzg5MjE2ODQ= | 1,686 | Dataset Error: DaNE contains empty samples at the end | [] | closed | false | null | 3 | 2021-01-05T11:54:26Z | 2021-01-05T14:01:09Z | 2021-01-05T14:00:13Z | null | The dataset DaNE, contains empty samples at the end. It is naturally easy to remove using a filter but should probably not be there, to begin with as it can cause errors.
```python
>>> import datasets
[...]
>>> dataset = datasets.load_dataset("dane")
[...]
>>> dataset["test"][-1]
{'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []}
>>> dataset["train"][-1]
{'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []}
```
Best,
Kenneth | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1686/timeline | null | completed | null | null | false | [
"Thanks for reporting, I opened a PR to fix that",
"One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\n```",
"If you have other questions feel free to reopen :) "
] |
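A minimal sketch of the filter workaround mentioned in the report, assuming the empty samples can be identified by their empty `text` field:
```python
import datasets

dataset = datasets.load_dataset("dane")
dataset = dataset.filter(lambda example: len(example["text"]) > 0)
print(dataset["test"][-1]["text"])  # should no longer be an empty string
```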
https://api.github.com/repos/huggingface/datasets/issues/1075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1075/comments | https://api.github.com/repos/huggingface/datasets/issues/1075/events | https://github.com/huggingface/datasets/pull/1075 | 756,501,235 | MDExOlB1bGxSZXF1ZXN0NTMyMDM4ODg1 | 1,075 | adding cleaned verion of E2E NLG | [] | closed | false | null | 0 | 2020-12-03T19:21:07Z | 2020-12-03T19:43:56Z | 2020-12-03T19:43:56Z | null | Found at: https://github.com/tuetschek/e2e-cleaning | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1075/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1075.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1075",
"merged_at": "2020-12-03T19:43:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1075.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1075"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4984/comments | https://api.github.com/repos/huggingface/datasets/issues/4984/events | https://github.com/huggingface/datasets/pull/4984 | 1,375,690,330 | PR_kwDODunzps4_FhTm | 4,984 | docs: ✏️ add links to the Datasets API | [] | closed | false | null | 2 | 2022-09-16T09:34:12Z | 2022-09-16T13:10:14Z | 2022-09-16T13:07:33Z | null | I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs.
I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas to integrate the API better in these docs without overdoing it. cc @lhoestq @julien-c @albertvillanova @stevhliu. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4984/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4984.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4984",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4984.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4984"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https://github.com/huggingface/datasets-server/issues/568"
] |
https://api.github.com/repos/huggingface/datasets/issues/2880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2880/comments | https://api.github.com/repos/huggingface/datasets/issues/2880/events | https://github.com/huggingface/datasets/pull/2880 | 990,877,940 | MDExOlB1bGxSZXF1ZXN0NzI5NDIzMDMy | 2,880 | Extend support for streaming datasets that use pathlib.Path stem/suffix | [] | closed | false | null | 0 | 2021-09-08T08:42:43Z | 2021-09-09T13:13:29Z | 2021-09-09T13:13:29Z | null | This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`.
Related to #2876, #2874, #2866.
CC: @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2880/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2880/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2880.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2880",
"merged_at": "2021-09-09T13:13:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2880.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2880"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2739/comments | https://api.github.com/repos/huggingface/datasets/issues/2739/events | https://github.com/huggingface/datasets/pull/2739 | 957,751,260 | MDExOlB1bGxSZXF1ZXN0NzAxMTI0ODQ3 | 2,739 | Pass tokenize to sacrebleu only if explicitly passed by user | [] | closed | false | null | 0 | 2021-08-02T05:09:05Z | 2021-08-03T04:23:37Z | 2021-08-03T04:23:37Z | null | Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15
This PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user, otherwise it will not pass it (and `sacrebleu` will use its default, no matter where it is and how it is called).
Close: #2737. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2739/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2739",
"merged_at": "2021-08-03T04:23:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2739"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5823/comments | https://api.github.com/repos/huggingface/datasets/issues/5823/events | https://github.com/huggingface/datasets/issues/5823 | 1,697,024,789 | I_kwDODunzps5lJosV | 5,823 | [2.12.0] DatasetDict.save_to_disk not saving to S3 | [] | closed | false | null | 2 | 2023-05-05T05:22:59Z | 2023-05-05T15:01:18Z | 2023-05-05T15:01:17Z | null | ### Describe the bug
When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket.
I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results.
### Steps to reproduce the bug
1. Create a DatsetDict `dataset`
2. Create a S3FileSystem object
`s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)`
3. Save using `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", fs=s3)`
4. Check the corresponding S3 bucket and verify nothing has been uploaded
5. Check the path at f"{s3_bucket}/{s3_dir}/{dataset_name}" and verify that files have been saved there
### Expected behavior
Artifacts are uploaded at the f"{s3_bucket}/{s3_dir}/{dataset_name}" S3 location.
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-x86_64-i386-64bit
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5823/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5823/timeline | null | completed | null | null | false | [
"Hi ! Can you try adding the `s3://` prefix ?\r\n```python\r\nf\"s3://{s3_bucket}/{s3_dir}/{dataset_name}\"\r\n```",
"Ugh, yeah that was it. Thank you!"
] |
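A minimal sketch of the fix from the comment above: keep the explicit `s3://` scheme in the path passed to `save_to_disk`. The bucket name and credentials are placeholders.
```python
from datasets import Dataset, DatasetDict
from datasets.filesystems import S3FileSystem

dataset_dict = DatasetDict({"train": Dataset.from_dict({"x": [1, 2, 3]})})
s3 = S3FileSystem(key="<aws_access_key_id>", secret="<aws_secret_access_key>")

dataset_dict.save_to_disk(
    "s3://<bucket>/<prefix>/my_dataset",  # note the explicit s3:// scheme
    storage_options=s3.storage_options,
)
```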
https://api.github.com/repos/huggingface/datasets/issues/4344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4344/comments | https://api.github.com/repos/huggingface/datasets/issues/4344/events | https://github.com/huggingface/datasets/pull/4344 | 1,234,882,542 | PR_kwDODunzps43xFEn | 4,344 | Fix docstring in DatasetDict::shuffle | [] | closed | false | null | 0 | 2022-05-13T08:06:00Z | 2022-05-25T09:23:43Z | 2022-05-24T15:35:21Z | null | I think due to #1626, the docstring contained this error ever since `seed` was added. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4344/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4344",
"merged_at": "2022-05-24T15:35:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4344"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4133/comments | https://api.github.com/repos/huggingface/datasets/issues/4133/events | https://github.com/huggingface/datasets/issues/4133 | 1,197,830,623 | I_kwDODunzps5HZXHf | 4,133 | HANS dataset preview broken | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 3 | 2022-04-08T21:06:15Z | 2022-04-13T11:57:34Z | 2022-04-13T11:57:34Z | null | ## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4133/timeline | null | completed | null | null | false | [
"The dataset cannot be loaded, be it in normal or streaming mode.\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1595, in __next__\r\n out = self.readline()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1592, in readline\r\n return self.readuntil(b\"\\n\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1581, in readuntil\r\n self.seek(start + found + len(char))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 676, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=False)\r\nDownloading and preparing dataset hans/plain_text (download: 29.51 MiB, generated: 30.34 MiB, post-processed: Unknown size, total: 59.85 MiB) to /home/slesage/.cache/huggingface/datasets/hans/plain_text/1.0.0/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1087, in _prepare_split\r\n for key, record in logging.tqdm(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in 
_generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\nValueError: readline of closed file\r\n```\r\n\r\n",
"Hi! I've opened a PR that should make this dataset stremable. You can test it as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"hans\", split=\"train\", streaming=True, revision=\"49decd29839c792ecc24ac88f861cbdec30c1c40\")\r\n```\r\n\r\n@severo The current script doesn't throw an error in normal mode (only in streaming mode) on my local machine or in Colab. Can you update your installation of `datasets` and see if that fixes the issue?",
"Thanks for this. It works well, thanks! The dataset viewer is using https://github.com/huggingface/datasets/releases/tag/2.0.0, I'm eager to upgrade to 2.0.1 😉"
] |
https://api.github.com/repos/huggingface/datasets/issues/3060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3060/comments | https://api.github.com/repos/huggingface/datasets/issues/3060/events | https://github.com/huggingface/datasets/issues/3060 | 1,022,936,396 | I_kwDODunzps48-MVM | 3,060 | load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached" | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-11T17:05:27Z | 2021-10-28T05:52:21Z | 2021-10-28T05:52:21Z | null | ## Describe the bug
When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error.
## Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('openwebtext')
```
## Expected results
I expect the `dataset` variable to be properly constructed.
## Actual results
```
File "/home/rschaef/CoCoSci-Language-Distillation/distillation_v2/ratchet_learning/tasks/base.py", line 37, in create_dataset
dataset_str,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/load.py", line 1117, in load_dataset
use_auth_token=use_auth_token,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 637, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rschaef/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/85b3ae7051d2d72e7c5fdf6dfb462603aaa26e9ed506202bf3a24d261c6c40a1/openwebtext.py", line 61, in _split_generators
dl_dir = dl_manager.download_and_extract(_URL)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 261, in extract
partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 316, in cached_path
output_path, force_extract=download_config.force_extract
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 40, in extract
self.extractor.extract(input_path, output_path, extractor=extractor)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 179, in extract
return extractor.extract(input_path, output_path)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 53, in extract
tar_file.extractall(output_path)
File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2052, in extract
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj
buf = src.read(bufsize)
File "/usr/lib/python3.6/lzma.py", line 200, in read
return self._buffer.read(size)
File "/usr/lib/python3.6/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/usr/lib/python3.6/_compression.py", line 99, in read
raise EOFError("Compressed file ended before the "
python-BaseException
EOFError: Compressed file ended before the end-of-stream marker was reached
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3060/timeline | null | completed | null | null | false | [
"Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload? Please use:\r\n```python\r\ndataset = load_dataset(\"openwebtext\", download_mode=\"FORCE_REDOWNLOAD\")\r\n```\r\n\r\nLet me know if the problem persists.",
"I close this issue for the moment. Feel free to re-open it again if the problem persists."
] |
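A minimal sketch of the suggested recovery step, assuming a recent `datasets` version where `download_mode` accepts the lowercase string form; it forces a fresh download so a partially downloaded archive is not reused.
```python
from datasets import load_dataset

dataset = load_dataset("openwebtext", download_mode="force_redownload")
```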
https://api.github.com/repos/huggingface/datasets/issues/1528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1528/comments | https://api.github.com/repos/huggingface/datasets/issues/1528/events | https://github.com/huggingface/datasets/pull/1528 | 764,724,035 | MDExOlB1bGxSZXF1ZXN0NTM4NjU0ODU0 | 1,528 | initial commit for Common Crawl Domain Names | [] | closed | false | null | 1 | 2020-12-13T01:32:49Z | 2020-12-18T13:51:38Z | 2020-12-18T10:22:32Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1528/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1528.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1528",
"merged_at": "2020-12-18T10:22:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1528.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1528"
} | true | [
"Thank you :)"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/4341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4341/comments | https://api.github.com/repos/huggingface/datasets/issues/4341/events | https://github.com/huggingface/datasets/issues/4341 | 1,234,739,703 | I_kwDODunzps5JmKH3 | 4,341 | Failing CI on Windows for sari and wiki_split metrics | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-05-13T04:55:17Z | 2022-05-13T05:47:41Z | 2022-05-13T05:47:41Z | null | ## Describe the bug
Our CI has been failing since yesterday on Windows for the metrics sari and wiki_split:
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4341/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4341/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5846/comments | https://api.github.com/repos/huggingface/datasets/issues/5846/events | https://github.com/huggingface/datasets/issues/5846 | 1,706,289,290 | I_kwDODunzps5ls-iK | 5,846 | load_dataset('bigcode/the-stack-dedup', streaming=True) very slow! | [] | open | false | null | 4 | 2023-05-11T17:58:57Z | 2023-05-16T03:23:46Z | null | null | ### Describe the bug
Running
```
import datasets
ds = datasets.load_dataset('bigcode/the-stack-dedup', streaming=True)
```
takes about 2.5 minutes!
I would expect this to be near instantaneous. With other datasets, the runtime is one or two seconds.
### Environment info
- `datasets` version: 2.11.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5846/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5846/timeline | null | null | null | null | false | [
"This is due to the slow resolution of the data files: https://github.com/huggingface/datasets/issues/5537.\r\n\r\nWe plan to switch to `huggingface_hub`'s `HfFileSystem` soon to make the resolution faster (will be up to 20x faster once we merge https://github.com/huggingface/huggingface_hub/pull/1443)\r\n\r\n",
"You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.",
"> You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.\r\n\r\nThat's unrelated to the problem discussed in this issue. ",
"> > You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.\r\n> \r\n> That's unrelated to the problem discussed in this issue.\r\n\r\nSorry, I misunderstood it."
] |
https://api.github.com/repos/huggingface/datasets/issues/3855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3855/comments | https://api.github.com/repos/huggingface/datasets/issues/3855/events | https://github.com/huggingface/datasets/issues/3855 | 1,162,448,589 | I_kwDODunzps5FSY7N | 3,855 | Bad error message when loading private dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-03-08T09:55:17Z | 2022-07-11T15:06:40Z | 2022-07-11T15:06:40Z | null | ## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from transformers import load_dataset
ds = load_dataset("NewT5/dummy_data", "dummy")
```
This command then fails with:
```bash
FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub
```
**even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org.
We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO.
## Steps to reproduce the bug
E.g. execute the following code to see the different error messages between `transformers` and `datasets`.
1. Transformers
```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model")
```
The error message is clearer here - it gives:
```
OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
Let's maybe do the same for datasets? The PR was introduced to `transformers` here:
https://github.com/huggingface/transformers/pull/15261
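For reference, a minimal sketch of how a private dataset is normally loaded once access is sorted out (the repo name is just the dummy one from above; `use_auth_token` is the mechanism the quoted `transformers` message points to):
```python
from datasets import load_dataset

# Requires being logged in via `huggingface-cli login`,
# or passing a token explicitly, e.g. use_auth_token="hf_..."
ds = load_dataset("NewT5/dummy_data", "dummy", use_auth_token=True)
```
The point of this issue is that when such a call fails, the error should hint at exactly this instead of claiming the dataset does not exist.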
## Expected results
Better error message
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3855/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3855/timeline | null | completed | null | null | false | [
"We raise the error “ FileNotFoundError: can’t find the dataset” mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)\r\n\r\nWe can indeed reformulate this and add the \"If this is a private repository,...\" part !",
"Resolved via https://github.com/huggingface/datasets/pull/4536"
] |
https://api.github.com/repos/huggingface/datasets/issues/3748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3748/comments | https://api.github.com/repos/huggingface/datasets/issues/3748/events | https://github.com/huggingface/datasets/pull/3748 | 1,142,128,763 | PR_kwDODunzps4zCEyM | 3,748 | Add tqdm arguments | [] | closed | false | null | 0 | 2022-02-18T00:47:55Z | 2022-02-18T00:59:15Z | 2022-02-18T00:59:15Z | null | In this PR, there are two changes.
1. The progress bar can now be displayed by passing the iterator's length to tqdm.
2. `tqdm_kwargs` can be passed through, allowing finer control over the tqdm progress bar. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3748/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3748/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3748.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3748",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3748.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3748"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4693/comments | https://api.github.com/repos/huggingface/datasets/issues/4693/events | https://github.com/huggingface/datasets/pull/4693 | 1,306,788,322 | PR_kwDODunzps47go-F | 4,693 | update `samsum` script | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 2 | 2022-07-16T11:53:05Z | 2022-09-23T11:40:11Z | 2022-09-23T11:37:57Z | null | update `samsum` script after #4672 was merged (citation is also updated) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4693/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4693",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4693"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"We are closing PRs to dataset scripts because we are moving them to the Hub.\r\n\r\nThanks anyway.\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/960/comments | https://api.github.com/repos/huggingface/datasets/issues/960/events | https://github.com/huggingface/datasets/pull/960 | 754,422,710 | MDExOlB1bGxSZXF1ZXN0NTMwMzI1MzUx | 960 | Add code to automate parts of the dataset card | [] | closed | false | null | 0 | 2020-12-01T14:04:51Z | 2021-04-26T07:56:01Z | 2021-04-26T07:56:01Z | null | Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/960/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/960.diff",
"html_url": "https://github.com/huggingface/datasets/pull/960",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/960.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/960"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6058/comments | https://api.github.com/repos/huggingface/datasets/issues/6058/events | https://github.com/huggingface/datasets/issues/6058 | 1,815,131,397 | I_kwDODunzps5sMLUF | 6,058 | laion-coco download error | [] | closed | false | null | 1 | 2023-07-21T04:24:15Z | 2023-07-22T01:42:06Z | 2023-07-22T01:42:06Z | null | ### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no_checks' instead.
warnings.warn(
Downloading and preparing dataset parquet/laion--laion-coco to /home/bian/.cache/huggingface/datasets/laion___parquet/laion--laion-coco-cb4205d7f1863066/0.0.0/bcacc8bdaa0614a5d73d0344c813275e590940c6ea8bc569da462847103a1afd...
Downloading data: 100%|█| 1.89G/1.89G [04:57<00:00,
Downloading data files: 100%|█| 1/1 [04:59<00:00, 2
Extracting data files: 100%|█| 1/1 [00:00<00:00, 13
Generating train split: 0 examples [00:00, ? examples/s]<_io.BufferedReader name='/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c'>
Traceback (most recent call last):
File "/home/bian/data/ZOC/download_laion_coco.py", line 4, in <module>
dataset = load_dataset("laion/laion-coco", ignore_verifications=True)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1842, in _prepare_split_single
generator = self._generate_tables(**gen_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in
_generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 323, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
I have carefully followed the instructions in #5264 but still get the same error.
Other helpful information:
```
ds = load_dataset("parquet", data_files=
...: "https://huggingface.co/datasets/laion/l
...: aion-coco/resolve/d22869de3ccd39dfec1507
...: f7ded32e4a518dad24/part-00000-2256f782-1
...: 26f-4dc6-b9c6-e6757637749d-c000.snappy.p
...: arquet")
Found cached dataset parquet (/home/bian/.cache/huggingface/datasets/parquet/default-a02eea00aeb08b0e/0.0.0/bb8ccf89d9ee38581ff5e51506d721a9b37f14df8090dc9b2d8fb4a40957833f)
100%|██████████████| 1/1 [00:00<00:00, 4.55it/s]
```
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("laion/laion-coco", ignore_verifications=True/False)
```
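A minimal sketch of the workaround suggested in the comments below (the cache path comes from the `_io.BufferedReader` line in the log above; delete the corrupted download, then retry):
```python
import os

from datasets import load_dataset

# Path of the file the reader failed on, as logged right before the error above
bad_file = "/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c"
if os.path.exists(bad_file):
    os.remove(bad_file)  # force a fresh download on the next call

dataset = load_dataset("laion/laion-coco")
```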
### Expected behavior
Properly load Laion-coco dataset
### Environment info
datasets==2.11.0 torch==1.12.1 python 3.10 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6058/timeline | null | completed | null | null | false | [
"This can also mean one of the files was not downloaded correctly.\r\n\r\nWe log an erroneous file's name before raising the reader's error, so this is how you can find the problematic file. Then, you should delete it and call `load_dataset` again.\r\n\r\n(I checked all the uploaded files, and they seem to be valid Parquet files, so I don't think this is a bug on their side)\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/839/comments | https://api.github.com/repos/huggingface/datasets/issues/839/events | https://github.com/huggingface/datasets/issues/839 | 740,355,270 | MDU6SXNzdWU3NDAzNTUyNzA= | 839 | XSum dataset missing spaces between sentences | [] | open | false | null | 0 | 2020-11-11T00:34:43Z | 2020-11-11T00:34:43Z | null | null | I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set):
`The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!"` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/839/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/839/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/443/comments | https://api.github.com/repos/huggingface/datasets/issues/443/events | https://github.com/huggingface/datasets/issues/443 | 666,246,716 | MDU6SXNzdWU2NjYyNDY3MTY= | 443 | Cannot unpickle saved .pt dataset with torch.save()/load() | [] | closed | false | null | 1 | 2020-07-27T12:13:37Z | 2020-07-27T13:05:11Z | 2020-07-27T13:05:11Z | null | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599)
>>> squad = squad.map(create_features, batched=True)
>>> squad.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"])
>>> torch.save(squad, "squad.pt")
>>> squad_pt = torch.load("squad.pt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/splits.py", line 493, in __setitem__
raise ValueError("Cannot add elem. Use .add() instead.")
ValueError: Cannot add elem. Use .add() instead.
```
where `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a dict with `source_ids`, `target_ids` and `attention_mask`.
```python
def create_features(batch):
source_text_encoding = tokenizer.batch_encode_plus(
batch["source_text"],
max_length=max_source_length,
pad_to_max_length=True,
truncation=True)
target_text_encoding = tokenizer.batch_encode_plus(
batch["target_text"],
max_length=max_target_length,
pad_to_max_length=True,
truncation=True)
features = {
"source_ids": source_text_encoding["input_ids"],
"target_ids": target_text_encoding["input_ids"],
"attention_mask": source_text_encoding["attention_mask"]
}
return features
```
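As a side note, a minimal sketch of persisting the processed dataset with the library's own helpers instead of `torch.save()` (shown with the current `datasets` API rather than the old `nlp` package, so treat the exact imports as an assumption; `create_features` is the function defined above):
```python
from datasets import load_dataset, load_from_disk

squad = load_dataset("squad", split="train")
squad = squad.map(create_features, batched=True)  # same preprocessing as above
squad.save_to_disk("squad_features")              # writes Arrow files + metadata, no pickling involved

squad_reloaded = load_from_disk("squad_features")
squad_reloaded.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"])
```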
I found a similar issue in [issue 5267 in the huggingface/transformers repo](https://github.com/huggingface/transformers/issues/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/443/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/443/timeline | null | completed | null | null | false | [
"This seems to be fixed in a non-released version. \r\n\r\nInstalling nlp from source\r\n```\r\ngit clone https://github.com/huggingface/nlp\r\ncd nlp\r\npip install .\r\n```\r\nsolves the issue. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2054/comments | https://api.github.com/repos/huggingface/datasets/issues/2054/events | https://github.com/huggingface/datasets/issues/2054 | 831,597,665 | MDU6SXNzdWU4MzE1OTc2NjU= | 2,054 | Could not find file for ZEST dataset | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 4 | 2021-03-15T09:11:58Z | 2021-05-03T09:30:24Z | 2021-05-03T09:30:24Z | null | I am trying to use zest dataset from Allen AI using below code in colab,
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("zest")
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
612 )
613 elif response is not None and response.status_code == 404:
--> 614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
616 raise ConnectionError("Couldn't reach {}".format(url))
FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2054/timeline | null | completed | null | null | false | [
"The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.",
"This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)",
"Thanks @lhoestq and @matt-peters ",
"I am closing this issue since its fixed!"
] |
https://api.github.com/repos/huggingface/datasets/issues/247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/247/comments | https://api.github.com/repos/huggingface/datasets/issues/247/events | https://github.com/huggingface/datasets/pull/247 | 632,380,078 | MDExOlB1bGxSZXF1ZXN0NDI5MTMwMzQ2 | 247 | Make all dataset downloads deterministic by applying `sorted` to glob and os.listdir | [] | closed | false | null | 3 | 2020-06-06T11:02:10Z | 2020-06-08T09:18:16Z | 2020-06-08T09:18:14Z | null | This PR makes all datasets loading deterministic by applying `sorted()` to all `glob.glob` and `os.listdir` statements.
Are there other "non-deterministic" functions apart from `glob.glob()` and `os.listdir()` that you can think of @thomwolf @lhoestq @mariamabarham @jplu ?
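To illustrate the change, a schematic example of the pattern applied throughout the dataset scripts (not an actual hunk from this diff; `data_dir` is a placeholder):
```python
import glob
import os

data_dir = "path/to/extracted/data"  # placeholder

# Before: iteration order depends on the filesystem and platform
files = glob.glob(os.path.join(data_dir, "*.txt"))
entries = os.listdir(data_dir)

# After: wrapping in sorted() yields the same order everywhere
files = sorted(glob.glob(os.path.join(data_dir, "*.txt")))
entries = sorted(os.listdir(data_dir))
```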
**Important**
It does break backward compatibility for these datasets because
1. When loading the complete dataset the order in which the examples are saved is different now
2. When loading only part of a split, the examples themselves might be different.
@patrickvonplaten - the nlp / longformer notebook has to be updated since the examples might now be different | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/247/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/247",
"merged_at": "2020-06-08T09:18:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/247"
} | true | [
"That's great!\r\n\r\nI think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n\r\nHere is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/tracability), we could incorporate a hash of the final Arrow Dataset to the `dataset.json` file and have a test on it as well as CI on a diversity of platform to test the hash (Win/Mac/Linux + various python/env).\r\nWhat do you think @lhoestq @patrickvonplaten?",
"> That's great!\r\n> \r\n> I think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n> \r\n> Here is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/tracability), we could incorporate a hash of the final Arrow Dataset to the `dataset.json` file and have a test on it as well as CI on a diversity of platform to test the hash (Win/Mac/Linux + various python/env).\r\n> What do you think @lhoestq @patrickvonplaten?\r\n\r\nI think that's a great idea! The test should be a `RUN_SLOW` test, since it takes a considerable amount of time to download the dataset and generate the examples, but I think we should add some kind of hash test for each dataset.",
"Really nice!!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1942/comments | https://api.github.com/repos/huggingface/datasets/issues/1942/events | https://github.com/huggingface/datasets/issues/1942 | 816,037,520 | MDU6SXNzdWU4MTYwMzc1MjA= | 1,942 | [experiment] missing default_experiment-1-0.arrow | [] | closed | false | null | 18 | 2021-02-25T03:02:15Z | 2022-10-05T13:08:45Z | 2022-10-05T13:08:45Z | null | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/.cache/huggingface/metrics` - there are many `*.arrow.lock` files but zero metrics files.
w/o the network I get:
```
FileNotFoundError: [Errno 2] No such file or directory: '~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow
```
there is just `~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock`
I did run the same `run_seq2seq.py` script on the instance with network and it worked just fine, but only the lock file was left behind.
this is with master.
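For context, the distributed usage that the discussion below converges on looks roughly like this (a hedged sketch based on the comments, not part of the original report; the `experiment_id` value is arbitrary):
```python
import torch.distributed as dist
from datasets import load_metric

# One metric instance per process, all contributing to the same experiment
metric = load_metric(
    "sacrebleu",
    num_process=dist.get_world_size(),
    process_id=dist.get_rank(),
    experiment_id="my_eval_run",  # avoids cache-file collisions between unrelated runs
)
```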
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1942/timeline | null | completed | null | null | false | [
"Hi !\r\n\r\nThe cache at `~/.cache/huggingface/metrics` stores the users data for metrics computations (hence the arrow files).\r\n\r\nHowever python modules (i.e. dataset scripts, metric scripts) are stored in `~/.cache/huggingface/modules/datasets_modules`.\r\n\r\nIn particular the metrics are cached in `~/.cache/huggingface/modules/datasets_modules/metrics/`\r\n\r\nFeel free to take a look at your cache and let me know if you find any issue that would help explaining why you had an issue with `rouge` with no connection. I'm doing some tests on my side to try to reproduce the issue you have\r\n",
"Thank you for clarifying that the metrics files are to be found elsewhere, @lhoestq \r\n\r\n> The cache at ~/.cache/huggingface/metrics stores the users data for metrics computations (hence the arrow files).\r\n\r\ncould it be renamed to reflect that? otherwise it misleadingly suggests that it's the metrics. Perhaps `~/.cache/huggingface/metrics-user-data`?\r\n\r\nAnd there are so many `.lock` files w/o corresponding files under `~/.cache/huggingface/metrics/`. Why are they there? \r\n\r\nfor example after I wipe out the dir completely and do one training I end up with:\r\n```\r\n~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock\r\n```\r\nwhat is that lock file locking when nothing is running?",
"The lock files come from an issue with filelock (see comment in the code [here](https://github.com/benediktschmitt/py-filelock/blob/master/filelock.py#L394-L398)). Basically on unix there're always .lock files left behind. I haven't dove into this issue",
"are you sure you need an external lock file? if it's a single purpose locking in the same scope you can lock the caller `__file__` instead, e.g. here is how one can `flock` the script file itself to ensure atomic printing:\r\n\r\n```\r\nimport fcntl\r\ndef printflock(*msgs):\r\n \"\"\" print in multiprocess env so that the outputs from different processes don't get interleaved \"\"\"\r\n with open(__file__, \"r\") as fh:\r\n fcntl.flock(fh, fcntl.LOCK_EX)\r\n try:\r\n print(*msgs)\r\n finally:\r\n fcntl.flock(fh, fcntl.LOCK_UN)\r\n```\r\n",
"OK, this issue is not about caching but some internal conflict/race condition it seems, I have just run into it on my normal env:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 356, in _finalize\r\n self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '/home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/run_seq2seq.py\", line 655, in <module>\r\n main()\r\n File \"examples/seq2seq/run_seq2seq.py\", line 619, in main\r\n test_results = trainer.predict(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py\", line 121, in predict\r\n return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 1706, in predict\r\n output = self.prediction_loop(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 1813, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"examples/seq2seq/run_seq2seq.py\", line 556, in compute_metrics\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 388, in compute\r\n self._finalize()\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 358, in _finalize\r\n raise ValueError(\r\nValueError: Error in finalize: another metric instance is already using the local cache file. 
Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\nI'm just running `run_seq2seq.py` under DeepSpeed:\r\n\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 100 --max_val_samples 100 --max_test_samples 100 --dataset_name wmt16 --dataset_config ro-en --source_prefix \"translate English to Romanian: \" --deepspeed examples/tests/deepspeed/ds_config.json\r\n```\r\n\r\nIt finished the evaluation OK and crashed on the prediction part of the Trainer. But the eval / predict parts no longer run under Deepspeed, it's just plain ddp.\r\n\r\nIs this some kind of race condition? It happens intermittently - there is nothing else running at the same time.\r\n\r\nBut if 2 independent instances of the same script were to run at the same time it's clear to see that this problem would happen. Perhaps it'd help to create a unique hash which is shared between all processes in the group and use that as the default experiment id?\r\n",
"When you're using metrics in a distributed setup, there are two cases:\r\n1. you're doing two completely different experiments (two evaluations) and the 2 metrics jobs have nothing to do with each other\r\n2. you're doing one experiment (one evaluation) but use multiple processes to feed the data to the metric.\r\n\r\nIn case 1. you just need to provide two different `experiment_id` so that the metrics don't collide.\r\nIn case 2. they must have the same experiment_id (or use the default one), but in this case you also need to provide the `num_processes` and `process_id`\r\n\r\nIf understand correctly you're in situation 2.\r\n\r\nIf so, you make sure that you instantiate the metrics with both the right `num_processes` and `process_id` parameters ?\r\n\r\nIf they're not set, then the cache files of the two metrics collide it can cause issues. For example if one metric finishes before the other, then the cache file is deleted and the other metric gets a FileNotFoundError\r\nThere's more information in the [documentation](https://huggingface.co/docs/datasets/loading_metrics.html#distributed-setups) if you want\r\n\r\nHope that helps !",
"Thank you for explaining that in a great way, @lhoestq \r\n\r\nSo the bottom line is that the `transformers` examples are broken since they don't do any of that. At least `run_seq2seq.py` just does `metric = load_metric(metric_name)`\r\n\r\nWhat test would you recommend to reliably reproduce this bug in `examples/seq2seq/run_seq2seq.py`?",
"To give more context, we are just using the metrics for the `comput_metric` function and nothing else. Is there something else we can use that just applies the function to the full arrays of predictions and labels? Because that's all we need, all the gathering has already been done because the datasets Metric multiprocessing relies on file storage and thus does not work in a multi-node distributed setup (whereas the Trainer does).\r\n\r\nOtherwise, we'll have to switch to something else to compute the metrics :-(",
"OK, it definitely leads to a race condition in how it's used right now. Here is how you can reproduce it - by injecting a random sleep time different for each process before the locks are acquired. \r\n```\r\n--- a/src/datasets/metric.py\r\n+++ b/src/datasets/metric.py\r\n@@ -348,6 +348,16 @@ class Metric(MetricInfoMixin):\r\n\r\n elif self.process_id == 0:\r\n # Let's acquire a lock on each node files to be sure they are finished writing\r\n+\r\n+ import time\r\n+ import random\r\n+ import os\r\n+ pid = os.getpid()\r\n+ random.seed(pid)\r\n+ secs = random.randint(1, 15)\r\n+ time.sleep(secs)\r\n+ print(f\"sleeping {secs}\")\r\n+\r\n file_paths, filelocks = self._get_all_cache_files()\r\n\r\n # Read the predictions and references\r\n@@ -385,7 +395,10 @@ class Metric(MetricInfoMixin):\r\n\r\n if predictions is not None:\r\n self.add_batch(predictions=predictions, references=references)\r\n+ print(\"FINALIZE START\")\r\n+\r\n self._finalize()\r\n+ print(\"FINALIZE END\")\r\n\r\n self.cache_file_name = None\r\n self.filelock = None\r\n```\r\n\r\nthen run with 2 procs: `python -m torch.distributed.launch --nproc_per_node=2`\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 10 --max_val_samples 10 --max_test_samples 10 --dataset_name wmt16 --dataset_config ro-en --source_prefix \"translate English to Romanian: \"\r\n```\r\n\r\n```\r\n***** Running Evaluation *****\r\n Num examples = 10\r\n Batch size = 16\r\n 0%| | 0/1 [00:00<?, ?it/s]FINALIZE START\r\nFINALIZE START\r\nsleeping 11\r\nFINALIZE END\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:11<00:00, 11.06s/it]\r\nsleeping 11\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 368, in _finalize\r\n self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File 
\"pyarrow/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '/home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/run_seq2seq.py\", line 645, in <module>\r\n main()\r\n File \"examples/seq2seq/run_seq2seq.py\", line 601, in main\r\n metrics = trainer.evaluate(\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer_seq2seq.py\", line 74, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1703, in evaluate\r\n output = self.prediction_loop(\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1876, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"examples/seq2seq/run_seq2seq.py\", line 556, in compute_metrics\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 402, in compute\r\n self._finalize()\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 370, in _finalize\r\n raise ValueError(\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```",
"I tried to adjust `run_seq2seq.py` and trainer to use the suggested dist env:\r\n```\r\n import torch.distributed as dist\r\n metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())\r\n```\r\nand in `trainer.py` added to call just for rank 0:\r\n```\r\n if self.is_world_process_zero() and self.compute_metrics is not None and preds is not None and label_ids is not None:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n```\r\nand then the process hangs in a deadlock. \r\n\r\nHere is the tb:\r\n```\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/filelock.py\", line 275 in acquire\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 306 in _check_all_processes_locks\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 501 in _init_writer\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 440 in add_batch\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 397 in compute\r\n File \"examples/seq2seq/run_seq2seq.py\", line 558 in compute_metrics\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1876 in prediction_loop\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1703 in evaluate\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer_seq2seq.py\", line 74 in evaluate\r\n File \"examples/seq2seq/run_seq2seq.py\", line 603 in main\r\n File \"examples/seq2seq/run_seq2seq.py\", line 651 in <module>\r\n```\r\n\r\nBut this sounds right, since in the above diff I set up a distributed metric and only called one process - so it's blocking on waiting for other processes to do the same.\r\n\r\nSo one working solution is to leave:\r\n\r\n```\r\n metric = load_metric(metric_name)\r\n```\r\nalone, and only call `compute_metrics` from rank 0\r\n```\r\n if self.is_world_process_zero() and self.compute_metrics is not None and preds is not None and label_ids is not None:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n```\r\n\r\nso we now no longer use the distributed env as far as `datasets` is concerned, it's just a single process.\r\n\r\nAre there any repercussions/side-effects to this proposed change in Trainer? If it always gathers all inputs on rank 0 then this is how it should have been done in first place - i.e. only run for rank 0. It appears that currently it was re-calculating the metrics on all processes on the same data just to throw the results away other than for rank 0. Unless I missed something.\r\n",
"But no, since \r\n`\r\n metric = load_metric(metric_name)\r\n`\r\nis called for each process, the race condition is still there. So still getting:\r\n\r\n```\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\ni.e. the only way to fix this is to `load_metric` only for rank 0, but this requires huge changes in the code and all end users' code.\r\n",
"OK, here is a workaround that works. The onus here is absolutely on the user:\r\n\r\n```\r\ndiff --git a/examples/seq2seq/run_seq2seq.py b/examples/seq2seq/run_seq2seq.py\r\nindex 2a060dac5..c82fd83ea 100755\r\n--- a/examples/seq2seq/run_seq2seq.py\r\n+++ b/examples/seq2seq/run_seq2seq.py\r\n@@ -520,7 +520,11 @@ def main():\r\n\r\n # Metric\r\n metric_name = \"rouge\" if data_args.task.startswith(\"summarization\") else \"sacrebleu\"\r\n- metric = load_metric(metric_name)\r\n+ import torch.distributed as dist\r\n+ if dist.is_initialized():\r\n+ metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())\r\n+ else:\r\n+ metric = load_metric(metric_name)\r\n\r\n def postprocess_text(preds, labels):\r\n preds = [pred.strip() for pred in preds]\r\n@@ -548,12 +552,17 @@ def main():\r\n # Some simple post-processing\r\n decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)\r\n\r\n+ kwargs = dict(predictions=decoded_preds, references=decoded_labels)\r\n+ if metric_name == \"rouge\":\r\n+ kwargs.update(use_stemmer=True)\r\n+ result = metric.compute(**kwargs) # must call for all processes\r\n+ if result is None: # only process with rank-0 will return metrics, others None\r\n+ return {}\r\n+\r\n if metric_name == \"rouge\":\r\n- result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\r\n # Extract a few results from ROUGE\r\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\r\n else:\r\n- result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n result = {\"bleu\": result[\"score\"]}\r\n\r\n prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]\r\n```\r\n\r\nThis is not user-friendly to say the least. And it's still wasteful as we don't need other processes to do anything.\r\n\r\nBut it solves the current race condition.\r\n\r\nClearly this calls for a design discussion as it's the responsibility of the Trainer to handle this and not user's. Perhaps in the `transformers` land?",
"I don't see how this could be the responsibility of `Trainer`, who hasn't the faintest idea of what a `datasets.Metric` is. The trainer takes a function `compute_metrics` that goes from predictions + labels to metric results, there is nothing there. That computation is done on all processes \r\n\r\nThe fact a `datasets.Metric` object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in `datasets`. Especially since, as I mentioned before, the multiprocessing part of `datasets.Metric` has a deep flaw since it can't work in a multinode environment. So you actually need to do the job of gather predictions and labels yourself.\r\n\r\nThe changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels `number_of_processes` times I believe, which is not going to make the metric computation any faster.\r\n\r\n",
"Right, to clarify, I meant it'd be good to have it sorted on the library side and not requiring the user to figure it out. This is too complex and error-prone and if not coded correctly the bug will be intermittent which is even worse.\r\n\r\nOh I guess I wasn't clear in my message - in no way I'm proposing that we use this workaround code - I was just showing what I had to do to make it work.\r\n\r\nWe are on the same page.\r\n\r\n> The changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels number_of_processes times I believe, which is not going to make the metric computation any faster.\r\n\r\nAnd yes, this is another problem that my workaround introduces. Thank you for pointing it out, @sgugger \r\n",
"> The fact a datasets.Metric object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in datasets\r\n\r\nYes totally, this use case is supposed to be supported by `datasets`. And in this case there shouldn't be any collision between the metrics. I'm looking into it :)\r\nMy guess is that at one point the metric isn't using the right file name. It's supposed to use one with a unique uuid in order to avoid the collisions.",
"I just opened #1966 to fix this :)\r\n@stas00 if have a chance feel free to try it !",
"Thank you, @lhoestq - I will experiment and report back. \r\n\r\nedit: It works! Thank you",
"Fixed in https://github.com/huggingface/datasets/pull/1966"
] |
https://api.github.com/repos/huggingface/datasets/issues/1999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1999/comments | https://api.github.com/repos/huggingface/datasets/issues/1999/events | https://github.com/huggingface/datasets/pull/1999 | 823,753,591 | MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy | 1,999 | Add FashionMNIST dataset | [] | closed | false | null | 1 | 2021-03-06T21:36:57Z | 2021-03-09T09:52:11Z | 2021-03-09T09:52:11Z | null | This PR adds [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1999/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1999.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1999",
"merged_at": "2021-03-09T09:52:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1999.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1999"
} | true | [
"Hi @lhoestq,\r\n\r\nI have added the changes from the review."
] |
https://api.github.com/repos/huggingface/datasets/issues/1052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1052/comments | https://api.github.com/repos/huggingface/datasets/issues/1052/events | https://github.com/huggingface/datasets/pull/1052 | 756,171,798 | MDExOlB1bGxSZXF1ZXN0NTMxNzU5MjA0 | 1,052 | add sharc dataset | [] | closed | false | null | 0 | 2020-12-03T12:57:23Z | 2020-12-03T16:44:21Z | 2020-12-03T14:09:54Z | null | This PR adds the ShARC dataset.
More info:
https://sharc-data.github.io/index.html | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1052/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1052/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1052.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1052",
"merged_at": "2020-12-03T14:09:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1052.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1052"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1133/comments | https://api.github.com/repos/huggingface/datasets/issues/1133/events | https://github.com/huggingface/datasets/pull/1133 | 757,307,660 | MDExOlB1bGxSZXF1ZXN0NTMyNzA1ODQ4 | 1,133 | Adding XQUAD-R Dataset | [] | closed | false | null | 0 | 2020-12-04T18:22:29Z | 2020-12-04T18:28:54Z | 2020-12-04T18:28:49Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1133/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1133.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1133",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1133.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1133"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3538/comments | https://api.github.com/repos/huggingface/datasets/issues/3538/events | https://github.com/huggingface/datasets/pull/3538 | 1,094,756,755 | PR_kwDODunzps4wlLmD | 3,538 | Readme usage update | [] | closed | false | null | 0 | 2022-01-05T21:26:28Z | 2022-01-05T23:34:25Z | 2022-01-05T23:24:15Z | null | Noticing that the recent commit throws a lot of errors in the automatic checks. It looks to me that those errors are simply errors that were already there (metadata issues), unrelated to what I've just changed, but worth another look to make sure. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3538/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3538.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3538",
"merged_at": "2022-01-05T23:24:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3538.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3538"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2194/comments | https://api.github.com/repos/huggingface/datasets/issues/2194/events | https://github.com/huggingface/datasets/issues/2194 | 853,909,452 | MDU6SXNzdWU4NTM5MDk0NTI= | 2,194 | py3.7: TypeError: can't pickle _LazyModule objects | [] | closed | false | null | 1 | 2021-04-08T21:02:48Z | 2021-04-09T16:56:50Z | 2021-04-09T01:52:57Z | null | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \
--per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \
--fp16
```
```
Traceback (most recent call last):
File "examples/language-modeling/run_clm.py", line 453, in <module>
main()
File "examples/language-modeling/run_clm.py", line 336, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map
update_data=update_data,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle _LazyModule objects
```
```
$ python --version
Python 3.7.4
$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.8.0.dev20210110+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
```
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2194/timeline | null | completed | null | null | false | [
"\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1072/comments | https://api.github.com/repos/huggingface/datasets/issues/1072/events | https://github.com/huggingface/datasets/pull/1072 | 756,454,511 | MDExOlB1bGxSZXF1ZXN0NTMxOTk2Njky | 1,072 | actually uses the previously declared VERSION on the configs in the template | [] | closed | false | null | 0 | 2020-12-03T18:44:27Z | 2020-12-03T19:35:46Z | 2020-12-03T19:35:46Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1072/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1072",
"merged_at": "2020-12-03T19:35:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1072"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/406/comments | https://api.github.com/repos/huggingface/datasets/issues/406/events | https://github.com/huggingface/datasets/issues/406 | 658,581,764 | MDU6SXNzdWU2NTg1ODE3NjQ= | 406 | Faster Shuffling? | [] | closed | false | null | 4 | 2020-07-16T21:21:53Z | 2020-09-07T14:45:26Z | 2020-09-07T14:45:25Z | null | Consider shuffling bookcorpus:
```
import nlp

dataset = nlp.load_dataset('bookcorpus', split='train')
dataset.shuffle()
```
According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`.
But I can also just write the lines to a text file:
```
from tqdm import tqdm

batch_size = 100000
with open('tmp.txt', 'w+') as out_f:
for i in tqdm(range(0, len(dataset), batch_size)):
batch = dataset[i:i+batch_size]['text']
print("\n".join(batch), file=out_f)
```
Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally,
```
dataset = nlp.load_dataset('text', data_files='tmp2.txt')
```
Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping.
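For reference, here is a minimal sketch of the `select`-with-a-random-permutation call that, as far as I understand, is equivalent to what `shuffle()` does internally; it is illustrative only, not the library's actual code:
```python
import numpy as np
import nlp

dataset = nlp.load_dataset('bookcorpus', split='train')
permutation = np.random.permutation(len(dataset))
shuffled = dataset.select(permutation)  # the select-based indirection that seems to dominate the runtime
```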
Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/406/timeline | null | completed | null | null | false | [
"I think the slowness here probably come from the fact that we are copying from and to python.\r\n\r\n@lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?",
"> @lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?\r\n\r\nI just tried with `writer.write_table` with tables of 1000 elements and it's slower that the solution in #405 \r\n\r\nOn my side (select 10 000 examples):\r\n- Original implementation: 12s\r\n- Batched solution: 100ms\r\n- solution using arrow tables: 350ms\r\n\r\nI'll try with arrays and record batches to see if we can make it work.",
"I tried using `.take` from pyarrow recordbatches but it doesn't improve the speed that much:\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\ndset = nlp.Dataset.from_file(\"dummy_test_select.arrow\") # dummy dataset with 100000 examples like {\"a\": \"h\"*512}\r\nindices = np.random.randint(0, 100_000, 1000_000)\r\n```\r\n\r\n```python\r\n%%time\r\nbatch_size = 10_000\r\nwriter = ArrowWriter(schema=dset.schema, path=\"dummy_path\",\r\n writer_batch_size=1000, disable_nullable=False)\r\nfor i in tqdm(range(0, len(indices), batch_size)):\r\n table = pa.concat_tables(dset._data.slice(int(i), 1) for i in indices[i : min(len(indices), i + batch_size)])\r\n batch = table.to_pydict()\r\n writer.write_batch(batch)\r\nwriter.finalize()\r\n# 9.12s\r\n```\r\n\r\n\r\n```python\r\n%%time\r\nbatch_size = 10_000\r\nwriter = ArrowWriter(schema=dset.schema, path=\"dummy_path\", \r\n writer_batch_size=1000, disable_nullable=False)\r\nfor i in tqdm(range(0, len(indices), batch_size)):\r\n batch_indices = indices[i : min(len(indices), i + batch_size)]\r\n # First, extract only the indices that we need with a mask\r\n mask = [False] * len(dset)\r\n for k in batch_indices:\r\n mask[k] = True\r\n t_batch = dset._data.filter(pa.array(mask))\r\n # Second, build the list of indices for the filtered table, and taking care of duplicates\r\n rev_positions = {}\r\n duplicates = 0\r\n for i, j in enumerate(sorted(batch_indices)):\r\n if j in rev_positions:\r\n duplicates += 1\r\n else:\r\n rev_positions[j] = i - duplicates\r\n rev_map = [rev_positions[j] for j in batch_indices]\r\n # Third, use `.take` from the combined recordbatch\r\n t_combined = t_batch.combine_chunks() # load in memory\r\n recordbatch = t_combined.to_batches()[0]\r\n table = pa.Table.from_arrays(\r\n [recordbatch[c].take(pa.array(rev_map)) for c in range(len(dset._data.column_names))],\r\n schema=writer.schema\r\n )\r\n writer.write_table(table)\r\nwriter.finalize()\r\n# 3.2s\r\n```\r\n",
"Shuffling is now significantly faster thanks to #513 \r\nFeel free to play with it now :)\r\n\r\nClosing this one, but feel free to re-open if you have other questions"
] |
https://api.github.com/repos/huggingface/datasets/issues/4539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4539/comments | https://api.github.com/repos/huggingface/datasets/issues/4539/events | https://github.com/huggingface/datasets/pull/4539 | 1,279,779,829 | PR_kwDODunzps46GfWv | 4,539 | Replace deprecated logging.warn with logging.warning | [] | closed | false | null | 0 | 2022-06-22T08:32:29Z | 2022-06-22T13:43:23Z | 2022-06-22T12:51:51Z | null | Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)).
* https://docs.python.org/3/library/logging.html#logging.Logger.warning
* https://github.com/python/cpython/issues/57444
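For illustration, a minimal before/after sketch of the replacement (the logger name is arbitrary):
```python
import logging

logger = logging.getLogger("datasets")

# deprecated spelling (the kind of call this PR replaces):
# logger.warn("something happened")

# preferred spelling:
logger.warning("something happened")
```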
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4539/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4539.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4539",
"merged_at": "2022-06-22T12:51:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4539.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4539"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5706/comments | https://api.github.com/repos/huggingface/datasets/issues/5706/events | https://github.com/huggingface/datasets/issues/5706 | 1,653,545,835 | I_kwDODunzps5ijxtr | 5,706 | Support categorical data types for Parquet | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 4 | 2023-04-04T09:45:35Z | 2023-05-12T19:21:43Z | null | null | ### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns:
```python
import pandas as pd
import pyarrow.parquet as pq
from datasets import load_dataset
# Create categorical sample DataFrame
df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category')
df.to_parquet('data.parquet')
# Read back as pyarrow table
table = pq.read_table('data.parquet')
print(table.schema)
# type: dictionary<values=string, indices=int32, ordered=0>
# Load with huggingface datasets
load_dataset('parquet', data_files='data.parquet')
```
Error:
```
Traceback (most recent call last):
File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single
writer.write_table(table)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer
inferred_features = Features.from_arrow_schema(inferred_schema)
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type
raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
NotImplementedError
```
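As a stop-gap, a possible workaround (not the requested feature) is to decode the categorical column back to plain strings before building the dataset; `type` is the column from the example above:
```python
import pandas as pd
from datasets import Dataset

df = pd.read_parquet('data.parquet')
df['type'] = df['type'].astype(str)  # drop the categorical dtype
ds = Dataset.from_pandas(df)         # a plain string column loads without the error
```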
### Motivation
Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow`, can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. Lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature.
### Your contribution
I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5706/timeline | null | null | null | null | false | [
"Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:\r\n- the conversion from HF type to PyArrow type is done in `get_nested_type`\r\n- the conversion from Pyarrow type to HF type is done in `generate_from_arrow_type`\r\n- `encode_nested_example` and `decode_nested_example` are used to do user's value (what users see) <-> storage value (what is in the pyarrow.array) if there's any conversion to do",
"@kklemon did you implement this? Otherwise I would like to give it a try",
"@mhattingpete no, I hadn't time for this so far. Feel free to work on this.",
"#self-assign",
"This would be super useful, so +1. \r\n\r\nAlso, these prior issues/PRs seem relevant: \r\nhttps://github.com/huggingface/datasets/issues/1906\r\nhttps://github.com/huggingface/datasets/pull/1936"
] |
https://api.github.com/repos/huggingface/datasets/issues/2676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2676/comments | https://api.github.com/repos/huggingface/datasets/issues/2676/events | https://github.com/huggingface/datasets/pull/2676 | 947,734,909 | MDExOlB1bGxSZXF1ZXN0NjkyNjc2NTg5 | 2,676 | Increase json reader block_size automatically | [] | closed | false | null | 0 | 2021-07-19T14:51:14Z | 2021-07-19T17:51:39Z | 2021-07-19T17:51:38Z | null | Currently some files can't be read with the default parameters of the JSON lines reader.
For example this one:
https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz
raises a pyarrow error:
```python
ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
The block size that is used is the default one by pyarrow (related to this [jira issue](https://issues.apache.org/jira/browse/ARROW-9612)).
To fix this issue I changed the block_size to increase automatically if there is a straddling issue when parsing a batch of json lines.
By default the value is `chunksize // 32` in order to leverage multithreading, and it doubles every time a straddling issue occurs. The block_size is then reset for each file.
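For illustration, here is a rough sketch of the retry idea (not the exact code in this PR; the function name and constants are made up):
```python
import pyarrow as pa
import pyarrow.json as paj

def read_json_file(file_path, chunksize=10 << 20):
    block_size = max(chunksize // 32, 16 << 10)  # start small to keep multithreading effective
    while True:
        try:
            return paj.read_json(file_path, read_options=paj.ReadOptions(block_size=block_size))
        except pa.ArrowInvalid as e:
            if "straddling" not in str(e):
                raise
            block_size *= 2  # straddling object: retry with a bigger block
```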
cc @thomwolf @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2676/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2676/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2676.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2676",
"merged_at": "2021-07-19T17:51:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2676.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2676"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1575/comments | https://api.github.com/repos/huggingface/datasets/issues/1575/events | https://github.com/huggingface/datasets/pull/1575 | 767,076,374 | MDExOlB1bGxSZXF1ZXN0NTM5OTEzNzgx | 1,575 | Hind_Encorp all done | [] | closed | false | null | 11 | 2020-12-15T01:36:02Z | 2020-12-16T15:15:17Z | 2020-12-16T15:15:17Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1575/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1575.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1575",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1575.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1575"
} | true | [
"ALL TEST PASSED locally @yjernite ",
"@rahul-art kindly run the following from the datasets folder \r\n\r\n```\r\nmake style \r\nflake8 datasets\r\n\r\n```\r\n",
"@skyprince999 I did that before it says all done \r\n",
"I did that again it gives the same output all done and then I synchronized my changes with this branch ",
"@lhoestq i did all the changes you suggested but at the time of load_dataset it is giving me error\r\n`**`datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=76591256, num_examples=1, dataset_name='hind_encorp'), 'recorded': SplitInfo(name='train', num_bytes=78945714, num_examples=273885, dataset_name='hind_encorp')}]`**`",
"\r\n\r\n\r\nI cloned the branch and it seems to work fine at my end. try to clear the cache - \r\n\r\n```\r\nrm -rf /home/ubuntu/.cache/huggingface/datasets/\r\nrm -rf /home/ubuntu/.cache/huggingface/modules/datasets_modules//datasets/\r\n```\r\nBut the dataset has only one record. Is that correct? \r\n\r\n",
"> @lhoestq i did all the changes you suggested but at the time of load_dataset it is giving me error\r\n> `**`datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=76591256, num_examples=1, dataset_name='hind_encorp'), 'recorded': SplitInfo(name='train', num_bytes=78945714, num_examples=273885, dataset_name='hind_encorp')}]`**`\r\n\r\nYou can ignore this error by adding `ignore_verifications=True` to `load_dataset`.\r\n\r\nThis error is raised because you're loading a dataset that you've already loaded once in the past. Therefore the library does some verifications to make sure it's generated the same way. \r\n\r\nHowever since you've done changes in the dataset script you should ignore these verifications.\r\n\r\nYou can regenerate the dataset_infos.json with\r\n```\r\ndatasets-cli test ./datasets/hindi_encorp --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n\r\n> I cloned the branch and it seems to work fine at my end. try to clear the cache -\r\n> \r\n> ```\r\n> rm -rf /home/ubuntu/.cache/huggingface/datasets/\r\n> rm -rf /home/ubuntu/.cache/huggingface/modules/datasets_modules//datasets/\r\n> ```\r\n> \r\n> But the dataset has only one record. Is that correct?\r\n> \r\n\r\nYes the current parsing is wrong, I've already given @rahul-art some suggestions and it looks like it works way better now (num_examples=273885).\r\n\r\nThanks for fixing the parsing @rahul-art !\r\nFeel free to commit and push your changes once it's ready :) ",
"i ran the command you provided datasets-cli test ./datasets/hindi_encorp --save_infos --all_configs --ignore_verifications \r\nbut now its giving this error @lhoestq \r\n\r\nFileNotFoundError: Couldn't find file locally at ./datasets/hindi_encorp/hindi_encorp.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/./datasets/hindi_encorp/hindi_encorp.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/hindi_encorp/hindi_encorp.py.\r\nIf the dataset was added recently, you may need to to pass script_version=\"master\" to find the loading script on the master branch.\r\n",
"whoops I meant `hind_encorp` instead of `hindi_encorp` sorry",
"@lhoestq all changes have done successfully in this PR #1584",
"Ok thanks ! closing this one in favor of #1584 "
] |
|
https://api.github.com/repos/huggingface/datasets/issues/4331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4331/comments | https://api.github.com/repos/huggingface/datasets/issues/4331/events | https://github.com/huggingface/datasets/pull/4331 | 1,234,016,110 | PR_kwDODunzps43uN2R | 4,331 | Adding eval metadata to Amazon Polarity | [] | closed | false | null | 0 | 2022-05-12T13:47:59Z | 2022-05-12T21:03:14Z | 2022-05-12T21:03:13Z | null | Adding eval metadata to Amazon Polarity | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4331/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4331.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4331",
"merged_at": "2022-05-12T21:03:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4331.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4331"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/853/comments | https://api.github.com/repos/huggingface/datasets/issues/853/events | https://github.com/huggingface/datasets/issues/853 | 743,426,583 | MDU6SXNzdWU3NDM0MjY1ODM= | 853 | concatenate_datasets support axis=0 or 1 ? | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "008672",
"default": true,
"description": "Extra attention is needed",
"id": 1935892884,
"name": "help wanted",
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | 10 | 2020-11-16T02:46:23Z | 2021-04-19T16:07:18Z | 2021-04-19T16:07:18Z | null | I want to achieve the following result

| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/853/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/853/timeline | null | completed | null | null | false | [
"Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_columns(example, index):\r\n example.update(d2[index])\r\n example.update(d3[index])\r\n return example\r\n\r\nfull_dataset = d1.map(add_columns, with_indices=True)\r\n```",
"Closing this one, feel free to re-open if you have other questions about this issue",
"That's not really difficult to add, though, no?\r\nI think it can be done without copy.\r\nMaybe let's add it to the roadmap?",
"Actually it's doable but requires to update the `Dataset._data_files` schema to support this.\r\nI'm re-opening this since we may want to add this in the future",
"Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows. ",
"Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help !\r\n\r\nHere is a few things about the current implementation:\r\n- A dataset object is a wrapper of one `pyarrow.Table` that contains the data\r\n- Pyarrow offers an API that allows to transform Table objects. For example there are functions like `concat_tables`, `Table.rename_columns`, `Table.add_column` etc.\r\n\r\nTherefore adding columns from another dataset is possible thanks to the pyarrow API and in particular `Table.add_column` :) \r\n\r\nHowever this breaks some features we have regarding pickle. A dataset object can be pickled and unpickled without loading all the data in memory. It is useful for multiprocessing for example. Pickling a dataset object is possible thanks to the `Dataset._data_files` which defines the list of arrow files that will be used to form the final Table (basically all the data from each files are concatenated on axis 0).\r\n\r\nTherefore to be able to add columns to a Dataset and still be able to work with it in a multiprocessing setup, we need to extend this last aspect to be able to reconstruct a Table object from multiple arrow files that are combined in both axis 0 and 1. Currently this reconstruction mechanism only supports axis 0.\r\n\r\nI'm sure we can figure something out that enables users to add columns from another dataset while keeping the multiprocessing support.",
"@lhoestq, we have two Pull Requests to implement:\r\n- Dataset.add_item: #1870\r\n- Dataset.add_column: #2145\r\nwhich add a single row or column, repectively.\r\n\r\nThe request here is to implement the concatenation of *multiple* rows/columns. Am I right?\r\n\r\nWe should agree on the API:\r\n- `concatenate_datasets` with `axis`?\r\n- other Dataset method name?",
"For the API, I like `concatenate_datasets` with `axis` personally :)\r\nFrom a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. The concatenation is either on axis=0 (append rows) or on axis=1 (append columns).\r\n\r\nRegarding what we need to implement:\r\nThe axis=0 is already supported and is the current behavior of `concatenate_datasets`.\r\nAlso `add_item` is not needed to implement axis=1 (though it's an awesome addition to this library).\r\n\r\nTo implement axis=1, we either need `add_column` or a `ConcatenationTable` constructor to concatenate tables horizontally.\r\nI have a preference for using a `ConcatenationTable` constructor because this way we can end up with a `ConcatenationTable` with only 1 additional block per table, while `add_column` would add 1 block per new column.\r\n\r\nMaybe we can simply have an equivalent of `ConcatenationTable.from_tables` but for axis=1 ?\r\n`axis` could also be an argument of `ConcatenationTable.from_tables`",
"@lhoestq I think I guessed your suggestions in advance... 😉 #2151",
"Cool ! Sorry I missed this one ^^\r\nI'm taking a look ;)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3329/comments | https://api.github.com/repos/huggingface/datasets/issues/3329/events | https://github.com/huggingface/datasets/issues/3329 | 1,065,096,971 | I_kwDODunzps4_fBcL | 3,329 | Map function: Type error on iter #999 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-11-27T17:53:05Z | 2021-11-29T20:40:15Z | 2021-11-29T20:40:15Z | null | ## Describe the bug
Using the map function, it throws a type error on iter #999
Here is the code I am calling:
```
import datasets

dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
text_numbers_to_int returns the input text with numbers replaced in the format {'context': text}
It happens at
```
File "C:\Users\lonek\anaconda3\envs\ai\Lib\site-packages\datasets\arrow_writer.py", line 289, in <listcomp>
  [row[0][col] for row in self.current_examples], type=col_type, try_type=col_try_type, col=col
```
The issue is that the list comprehension expects self.current_examples to be of type tuple(dict, str), but for some reason 26 out of 1000 of the self.current_examples entries are of type tuple(str, str).
Here is an example of what self.current_examples should be
({'context': 'Super Bowl 50 was an...merals 50.'}, '')
Here is an example of what self.current_examples are when it throws the error:
('The Panthers used th... Marriott.', '')
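For reference, a minimal sketch of the return shape the writer expects from the mapped function (the actual number-conversion logic is elided here):
```python
def text_numbers_to_int(text, column='context'):
    result = text  # ...number replacement elided...
    return {column: result}  # always a dict keyed by the column name, never a bare string
```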
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3329/timeline | null | completed | null | null | false | [
"Hi, thanks for reporting.\r\n\r\nIt would be really helpful if you could provide the actual code of the `text_numbers_to_int` function so we can reproduce the error.",
"```\r\ndef text_numbers_to_int(text, column=\"\"):\r\n \"\"\"\r\n Convert text numbers to int.\r\n\r\n :param text: text numbers\r\n :return: int\r\n \"\"\"\r\n try:\r\n numbers = find_numbers(text)\r\n if not numbers:\r\n return text\r\n result = \"\"\r\n i, j = 0, 0\r\n while i < len(text):\r\n if j < len(numbers) and i == numbers[j][1]:\r\n n = int(numbers[j][0]) if numbers[j][0] % 1 == 0 else float(numbers[j][0])\r\n result += str(n)\r\n i = numbers[j][2] #end\r\n j += 1\r\n else:\r\n result += text[i]\r\n i += 1\r\n if column:\r\n return{column: result}\r\n else:\r\n return {column: result}\r\n except Exception as e:\r\n print(e)\r\n return {column: result}\r\n```",
"Maybe this is because of the `return text` line ? I think it should return a dictionary rather than a string",
"Yes that was it, good catch! Thanks"
] |
https://api.github.com/repos/huggingface/datasets/issues/5980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5980/comments | https://api.github.com/repos/huggingface/datasets/issues/5980/events | https://github.com/huggingface/datasets/issues/5980 | 1,770,255,973 | I_kwDODunzps5pg_Zl | 5,980 | Viewing dataset card returns “502 Bad Gateway” | [] | closed | false | null | 3 | 2023-06-22T19:14:48Z | 2023-06-27T08:38:19Z | 2023-06-26T14:42:45Z | null | The url is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams
I am able to successfully view the “Files and versions” tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main)
Any help would be appreciated! Thanks! I hope this is the right place to report an issue like this.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5980/timeline | null | completed | null | null | false | [
"Can you try again? Maybe there was a minor outage.",
"Yes, it seems to be working now. In case it's helpful, the outage lasted several days. It was failing as late as yesterday morning. ",
"we fixed something on the server side, glad it's fixed now"
] |
https://api.github.com/repos/huggingface/datasets/issues/5200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5200/comments | https://api.github.com/repos/huggingface/datasets/issues/5200/events | https://github.com/huggingface/datasets/issues/5200 | 1,435,831,559 | I_kwDODunzps5VlQ0H | 5,200 | Some links to canonical datasets in the docs are outdated | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-11-04T10:06:21Z | 2022-11-07T18:40:20Z | 2022-11-07T18:40:20Z | null | As we don't have canonical datasets in the github repo anymore, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and there are probably more. These links should be replaced by links to the corresponding datasets on the Hub. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5200/timeline | null | completed | null | null | false | [
"Thanks for catching this, I can go through the docs and replace the links to their corresponding datasets on the Hub!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3357/comments | https://api.github.com/repos/huggingface/datasets/issues/3357/events | https://github.com/huggingface/datasets/pull/3357 | 1,068,607,382 | PR_kwDODunzps4vQmcL | 3,357 | Update languages in aeslc dataset card | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 0 | 2021-12-01T16:20:46Z | 2022-09-23T13:16:49Z | 2022-09-23T13:16:49Z | null | After having worked a bit with the dataset:
As far as I know, it is solely in English (en-US). There are only a few mails in Spanish, French or German (less than a dozen I would estimate). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3357/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3357/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3357.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3357",
"merged_at": "2022-09-23T13:16:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3357.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3357"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/749/comments | https://api.github.com/repos/huggingface/datasets/issues/749/events | https://github.com/huggingface/datasets/issues/749 | 726,366,062 | MDU6SXNzdWU3MjYzNjYwNjI= | 749 | [XGLUE] Adding new dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 15 | 2020-10-21T10:51:36Z | 2022-09-30T11:35:30Z | 2021-01-06T10:02:55Z | null | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/749/timeline | null | completed | null | null | false | [
"Amazing! ",
"Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language *cf.* here: \r\n\r\n\r\n\r\nSo, I'd suggest to have exactly 11 \"language-independent\" configs: \"ner\", \"pos\", ... and give the sample in each dataset in the config a \"language\" label being one of \"ar\", \"bg\", .... => To me this makes more sense than making languaga specific config, *e.g.* \"ner-de\", ...especially because training data is only available in English. Do you guys agree? ",
"In this case we should have named splits, so config `ner` has splits `train`, `validation`, `test-en`, `test-ar`, `test-bg`, etc...\r\n\r\nThis is more in the spirit of the task afaiu, and will avoid making users do the filtering step themselves when testing different models or different configurations of the same model.",
"I see your point! \r\n\r\nI think this would be quite feasible to do and makes sense to me as well! In the paper results are reported per language, so it seems more natural to do it this way. \r\n\r\nGood for me @yjernite ! What do the others think? @lhoestq \r\n",
"I agree with Yacine on this!",
"Okey actually not that easy to add things like `test-de` to `datasets` => this would be the first dataset to have this.\r\nSee: https://github.com/huggingface/datasets/pull/802",
"IMO we should have one config per language. That's what we're doing for xnli, xtreme etc.\r\nHaving split names that depend on the language seems wrong. We should try to avoid split names that are not train/val/test.\r\nSorry for late response on this one",
"@lhoestq agreed on having one config per language, but we also need to be able to have different split names and people are going to want to use hyphens, so we should at the very least warn them why it's failing :) E.g. for ANLI with different stages of data (currently using underscores) or https://www.tau-nlp.org/commonsenseqa with their train-sanity or dev-sanity splits",
"Yes sure ! Could you open a separate issue for that ?",
"Really cool dataset 👍 btw. does Transformers support all 11 tasks 🤔 would be awesome to have a xglue script (like the \"normal\" glue one)",
"Just to make sure this is what we want here. If we add one config per language, \r\n\r\nthis means that this dataset ends up with well over 100 different configs most of which will have the same `train` split. The train split is always in English. Also, I'm not sure whether it's better for the user to be honest. \r\n\r\nI think it could be quite confusing for the user to have\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner-de\", split=\"train\")\r\n```\r\n\r\nin English even though it's `ner-de`.\r\n\r\nTo be honest, I'd prefer:\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test-de\")\r\ntest_dataset_fr = load_dataset(\"xglue\", \"ner\", split=\"test-fr\")\r\n```\r\n\r\nhere",
"Oh yes right I didn't notice the train set was always in english sorry.\r\nMoreover it seems that the way this dataset is used is to pick a pretrained multilingual model, fine-tune it on the english train set and then evaluate on each test set (one per language).\r\nSo to better fit the usual usage of this dataset, I agree that it's better to have one test split per language. \r\n\r\nSomething like your latest example patrick is fine imo :\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test.de\")\r\n```\r\n\r\nI just replace test-de with test.de since `-` is not allowed for split names (it has to follow the `\\w+` regex), and usually we specify the language after a point. ",
"Closing since XGLUE has been added in #802 , thanks patrick :) ",
"I need xglue Urdu summarization dataset so how can i get it?",
"According to the table in https://huggingface.co/datasets/xglue, Urdu only exists for POS and XNLI in XGLUE - not for summarization"
] |
https://api.github.com/repos/huggingface/datasets/issues/1498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1498/comments | https://api.github.com/repos/huggingface/datasets/issues/1498/events | https://github.com/huggingface/datasets/pull/1498 | 763,303,606 | MDExOlB1bGxSZXF1ZXN0NTM3Nzc2MjM5 | 1,498 | add stereoset | [] | closed | false | null | 0 | 2020-12-12T05:04:37Z | 2020-12-18T10:03:53Z | 2020-12-18T10:03:53Z | null | StereoSet is a dataset that measures stereotype bias in language models. StereoSet consists of 17,000 sentences that measure model preferences across gender, race, religion, and profession. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1498/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1498/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1498.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1498",
"merged_at": "2020-12-18T10:03:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1498.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1498"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1558/comments | https://api.github.com/repos/huggingface/datasets/issues/1558/events | https://github.com/huggingface/datasets/pull/1558 | 765,707,907 | MDExOlB1bGxSZXF1ZXN0NTM5MDQ2MzA4 | 1,558 | Adding Igbo NER data | [] | closed | false | null | 3 | 2020-12-13T23:52:11Z | 2020-12-21T14:38:20Z | 2020-12-21T14:38:20Z | null | This PR adds the Igbo NER dataset.
Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1558/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1558/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1558.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1558",
"merged_at": "2020-12-21T14:38:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1558.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1558"
} | true | [
"Thanks for the PR @purvimisal. \r\n\r\nFew comments below. ",
"Hi, @lhoestq Thank you for the review. I have made all the changes. PTAL! ",
"the CI error is not related to your dataset, merging"
] |
https://api.github.com/repos/huggingface/datasets/issues/1470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1470/comments | https://api.github.com/repos/huggingface/datasets/issues/1470/events | https://github.com/huggingface/datasets/pull/1470 | 761,791,065 | MDExOlB1bGxSZXF1ZXN0NTM2NDA2MjQx | 1,470 | Add wiki lingua dataset | [] | closed | false | null | 7 | 2020-12-11T02:04:18Z | 2020-12-16T15:27:13Z | 2020-12-16T15:27:13Z | null | Hello @lhoestq ,
I am opening a fresh pull request as advised in my original PR https://github.com/huggingface/datasets/pull/1308
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1470/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1470",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1470"
} | true | [
"it’s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\nwhich i think is not the dataset you are doing a PR for. Try rebasing with:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push -u -f origin your_branch\r\n```",
"> it’s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\n> which i think is not the dataset you are doing a PR for. Try rebasing with:\r\n> \r\n> ```\r\n> git fetch upstream\r\n> git rebase upstream/master\r\n> git push -u -f origin your_branch\r\n> ```\r\n\r\nThanks, my branch seems to be up to date. \r\n```Current branch add-wiki-lingua-dataset is up to date.```",
"Also where do the google drive urls come from ?",
"looks like this PR includes changes about many other files than the ones for wiki_lingua.\r\n\r\nCan you create another branch and another PR ?\r\n(or you can try to fix this branch with rebase and push force if you're familiar with it)",
"Thanks for fixing the dummy data and removing the glob call :) ",
"> looks like this PR includes changes about many other files than the ones for wiki_lingua.\r\n> \r\n> Can you create another branch and another PR ?\r\n> (or you can try to fix this branch with rebase and push force if you're familiar with it)\r\n\r\nEasier to create a new branch and submit, I have submitted a new PR #1582 ",
"Closing this one in favor of #1582 "
] |
https://api.github.com/repos/huggingface/datasets/issues/4513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4513/comments | https://api.github.com/repos/huggingface/datasets/issues/4513/events | https://github.com/huggingface/datasets/pull/4513 | 1,273,450,338 | PR_kwDODunzps45xTqv | 4,513 | Update Google Cloud Storage documentation and add Azure Blob Storage example | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 5 | 2022-06-16T11:46:09Z | 2022-06-23T17:05:11Z | 2022-06-23T16:54:59Z | null | While I was going through the 🤗 Datasets documentation of the Cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved: e.g. a bullet point says "Load your dataset" when the actual call is to "Save your dataset", an in-line code comment mentions an "s3 bucket" instead of a "gcs bucket", and some more in-line comments could be included.
Also, I think that mixing Google Cloud Storage documentation with AWS S3's one was a little bit confusing, so I moved all those to the end of the document under an h2 tab named "Other filesystems", with an h3 for "Google Cloud Storage".
Besides that, I was working with Azure Blob Storage and found out that [adlfs](https://github.com/fsspec/adlfs) (and its URL, which I updated even though the old redirect still worked) is shared by both Azure Blob Storage and Azure Data Lake Storage, so I decided to group those two under the same row in the column of supported filesystems.
I also took the chance to add a small documentation entry for Azure Blob Storage, analogous to the Google Cloud Storage one, as I assume that AWS S3, GCP Cloud Storage, and Azure Blob Storage are the most used cloud storage providers.
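For context, here is a rough sketch of the kind of Azure Blob Storage snippet the new entry shows; the account name, key, and container below are placeholders, and the call mirrors the existing S3/GCS pattern rather than being the exact text of the docs:
```python
from adlfs import AzureBlobFileSystem
from datasets import load_dataset

fs = AzureBlobFileSystem(account_name="my_account", account_key="xxxx")
dataset = load_dataset("imdb", split="train")
dataset.save_to_disk("abfs://my-container/imdb", fs=fs)
```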
Let me know if you're OK with these changes, or whether you want me to roll back some of those! :hugs: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4513/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4513/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4513.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4513",
"merged_at": "2022-06-23T16:54:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4513.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4513"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @stevhliu, I've kept the `>>>` before all the in-line code comments as it was done like that in the default S3 example that was already there, I assume that it's done like that just for readiness, let me know whether we should remove the `>>>` in the Python blocks before the in-line code comments or keep them.\r\n\r\n\r\n",
"Comments are ignored by doctest, so I think we can remove the `>>>` :)",
"Cool I'll remove those now 👍🏻",
"Sure @lhoestq, I just kept that structure as that was the more similar one to the one that was already there, but we can go with that approach, just let me know whether I should change the headers so as to leave all those providers in the same level (`h2`). Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2412/comments | https://api.github.com/repos/huggingface/datasets/issues/2412/events | https://github.com/huggingface/datasets/issues/2412 | 903,769,151 | MDU6SXNzdWU5MDM3NjkxNTE= | 2,412 | Docstring mistake: dataset vs. metric | [] | closed | false | null | 1 | 2021-05-27T13:39:11Z | 2021-06-01T08:18:04Z | 2021-06-01T08:18:04Z | null | This:
https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582
It should rather be something like:
`a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)`
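For reference, the call the corrected docstring points to (a quick way to see the available identifiers):
```python
import datasets

print(datasets.list_metrics()[:5])  # e.g. ['accuracy', 'bertscore', ...]
```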
I can provide a PR l8er... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2412/timeline | null | completed | null | null | false | [
"> I can provide a PR l8er...\r\n\r\nSee #2425 "
] |
https://api.github.com/repos/huggingface/datasets/issues/5472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5472/comments | https://api.github.com/repos/huggingface/datasets/issues/5472/events | https://github.com/huggingface/datasets/pull/5472 | 1,558,662,251 | PR_kwDODunzps5Inlp8 | 5,472 | Release: 2.9.0 | [] | closed | false | null | 4 | 2023-01-26T19:29:42Z | 2023-01-26T19:40:44Z | 2023-01-26T19:33:00Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5472/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5472/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5472.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5472",
"merged_at": "2023-01-26T19:33:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5472.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5472"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008578 / 0.011353 (-0.002775) | 0.004535 / 0.011008 (-0.006473) | 0.100694 / 0.038508 (0.062186) | 0.029570 / 0.023109 (0.006460) | 0.296384 / 0.275898 (0.020486) | 0.354405 / 0.323480 (0.030925) | 0.006962 / 0.007986 (-0.001024) | 0.003405 / 0.004328 (-0.000924) | 0.077275 / 0.004250 (0.073025) | 0.036623 / 0.037052 (-0.000429) | 0.309844 / 0.258489 (0.051355) | 0.340343 / 0.293841 (0.046502) | 0.033626 / 0.128546 (-0.094920) | 0.011433 / 0.075646 (-0.064214) | 0.322659 / 0.419271 (-0.096612) | 0.040509 / 0.043533 (-0.003024) | 0.294002 / 0.255139 (0.038863) | 0.323259 / 0.283200 (0.040059) | 0.088023 / 0.141683 (-0.053660) | 1.462039 / 1.452155 (0.009885) | 1.495401 / 1.492716 (0.002684) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218614 / 0.018006 (0.200608) | 0.482359 / 0.000490 (0.481869) | 0.001216 / 0.000200 (0.001016) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023167 / 0.037411 (-0.014245) | 0.098468 / 0.014526 (0.083942) | 0.108273 / 0.176557 (-0.068284) | 0.139991 / 0.737135 (-0.597144) | 0.109032 / 0.296338 (-0.187307) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421526 / 0.215209 (0.206317) | 4.216808 / 2.077655 (2.139153) | 1.860550 / 1.504120 (0.356431) | 1.654518 / 1.541195 (0.113323) | 1.699064 / 1.468490 
(0.230574) | 0.691489 / 4.584777 (-3.893287) | 3.401885 / 3.745712 (-0.343827) | 2.792860 / 5.269862 (-2.477001) | 1.516269 / 4.565676 (-3.049408) | 0.081627 / 0.424275 (-0.342648) | 0.012556 / 0.007607 (0.004949) | 0.531535 / 0.226044 (0.305491) | 5.320752 / 2.268929 (3.051823) | 2.314502 / 55.444624 (-53.130123) | 1.967118 / 6.876477 (-4.909359) | 2.008252 / 2.142072 (-0.133821) | 0.809730 / 4.805227 (-3.995497) | 0.148112 / 6.500664 (-6.352552) | 0.064821 / 0.075469 (-0.010648) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269754 / 1.841788 (-0.572033) | 13.884200 / 8.074308 (5.809892) | 13.914390 / 10.191392 (3.722998) | 0.150176 / 0.680424 (-0.530248) | 0.028463 / 0.534201 (-0.505738) | 0.398723 / 0.579283 (-0.180561) | 0.400433 / 0.434364 (-0.033931) | 0.485169 / 0.540337 (-0.055169) | 0.565995 / 1.386936 (-0.820941) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004504 / 0.011008 (-0.006504) | 0.097905 / 0.038508 (0.059397) | 0.027140 / 0.023109 (0.004031) | 0.408742 / 0.275898 (0.132844) | 0.448707 / 0.323480 (0.125228) | 0.004819 / 0.007986 (-0.003166) | 0.004761 / 0.004328 (0.000433) | 0.075456 / 0.004250 (0.071205) | 0.036282 / 0.037052 (-0.000771) | 0.405961 / 0.258489 (0.147472) | 0.449411 / 0.293841 (0.155570) | 0.031159 / 0.128546 (-0.097387) | 0.011693 / 0.075646 (-0.063954) | 0.321124 / 0.419271 (-0.098147) | 0.041369 / 0.043533 (-0.002164) | 0.408070 / 0.255139 (0.152931) | 0.428704 / 0.283200 (0.145504) | 0.086839 / 0.141683 (-0.054844) | 1.477772 / 1.452155 (0.025617) | 1.555913 / 1.492716 (0.063197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239494 / 0.018006 (0.221488) | 0.410785 / 0.000490 (0.410295) | 0.000989 / 0.000200 (0.000789) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023805 / 0.037411 (-0.013607) | 0.097904 / 0.014526 (0.083378) | 0.106437 / 0.176557 (-0.070120) | 0.140555 / 0.737135 (-0.596580) | 0.107169 / 0.296338 (-0.189170) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470233 / 0.215209 (0.255024) | 4.700451 / 2.077655 (2.622797) | 2.391712 / 1.504120 (0.887592) | 2.191125 / 1.541195 (0.649930) | 2.268924 / 1.468490 (0.800434) | 0.692421 / 4.584777 (-3.892356) | 3.387117 / 3.745712 (-0.358595) | 1.881731 / 5.269862 (-3.388130) | 1.155759 / 4.565676 (-3.409917) | 0.082040 / 0.424275 (-0.342236) | 0.012687 / 0.007607 (0.005080) | 0.567556 / 0.226044 (0.341511) | 5.701408 / 2.268929 (3.432480) | 2.864368 / 55.444624 (-52.580256) | 2.512073 / 6.876477 (-4.364404) | 2.546078 / 2.142072 (0.404005) | 0.795939 / 4.805227 (-4.009288) | 0.150078 / 6.500664 (-6.350586) | 0.067644 / 0.075469 (-0.007825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281681 / 1.841788 (-0.560107) | 13.967107 / 8.074308 (5.892799) | 13.293648 / 10.191392 (3.102256) | 0.128027 / 0.680424 (-0.552397) | 0.016791 / 0.534201 (-0.517410) | 0.379400 / 0.579283 (-0.199884) | 0.386847 / 0.434364 (-0.047517) | 0.469859 / 0.540337 (-0.070478) | 0.564203 / 1.386936 (-0.822733) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008701 / 0.011353 (-0.002652) | 0.004564 / 0.011008 (-0.006444) | 0.100578 / 0.038508 (0.062070) | 0.029209 / 0.023109 (0.006100) | 0.315308 / 0.275898 (0.039410) | 0.381022 / 0.323480 (0.057542) | 0.007152 / 0.007986 (-0.000834) | 0.003511 / 0.004328 (-0.000817) | 0.078361 / 0.004250 (0.074110) | 0.035394 / 0.037052 (-0.001658) | 0.331076 / 0.258489 (0.072586) | 0.366613 / 0.293841 (0.072772) | 0.033466 / 0.128546 (-0.095080) | 0.011521 / 0.075646 (-0.064126) | 0.322178 / 0.419271 (-0.097093) | 0.040891 / 0.043533 (-0.002641) | 0.320418 / 0.255139 (0.065279) | 0.345199 / 0.283200 (0.062000) | 0.087906 / 0.141683 (-0.053777) | 1.476801 / 1.452155 (0.024646) | 1.497738 / 1.492716 (0.005022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178094 / 0.018006 (0.160087) | 0.408317 / 0.000490 (0.407827) | 0.001825 / 0.000200 (0.001625) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022402 / 0.037411 (-0.015010) | 0.097104 / 0.014526 (0.082578) | 0.105361 / 0.176557 (-0.071196) | 0.139728 / 0.737135 (-0.597407) | 0.109613 / 0.296338 (-0.186725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418245 / 0.215209 (0.203036) | 4.155655 / 2.077655 (2.078000) | 1.865892 / 1.504120 (0.361772) | 1.659003 / 1.541195 (0.117809) | 1.725649 / 1.468490 
(0.257159) | 0.688733 / 4.584777 (-3.896044) | 3.323529 / 3.745712 (-0.422184) | 1.867807 / 5.269862 (-3.402054) | 1.157740 / 4.565676 (-3.407936) | 0.081947 / 0.424275 (-0.342329) | 0.012471 / 0.007607 (0.004864) | 0.529333 / 0.226044 (0.303288) | 5.284898 / 2.268929 (3.015970) | 2.321741 / 55.444624 (-53.122883) | 1.975683 / 6.876477 (-4.900794) | 2.029691 / 2.142072 (-0.112381) | 0.810212 / 4.805227 (-3.995015) | 0.148185 / 6.500664 (-6.352479) | 0.064594 / 0.075469 (-0.010875) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.183391 / 1.841788 (-0.658396) | 13.574760 / 8.074308 (5.500452) | 14.215015 / 10.191392 (4.023623) | 0.150776 / 0.680424 (-0.529648) | 0.029058 / 0.534201 (-0.505143) | 0.404071 / 0.579283 (-0.175212) | 0.401289 / 0.434364 (-0.033075) | 0.490946 / 0.540337 (-0.049392) | 0.582292 / 1.386936 (-0.804644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006695 / 0.011353 (-0.004658) | 0.004499 / 0.011008 (-0.006510) | 0.097633 / 0.038508 (0.059125) | 0.027606 / 0.023109 (0.004496) | 0.413191 / 0.275898 (0.137293) | 0.441896 / 0.323480 (0.118416) | 0.005703 / 0.007986 (-0.002283) | 0.004608 / 0.004328 (0.000280) | 0.074392 / 0.004250 (0.070141) | 0.037966 / 0.037052 (0.000913) | 0.410736 / 0.258489 (0.152247) | 0.448581 / 0.293841 (0.154740) | 0.031594 / 0.128546 (-0.096952) | 0.011597 / 0.075646 (-0.064049) | 0.319632 / 0.419271 (-0.099639) | 0.041189 / 0.043533 (-0.002343) | 0.407120 / 0.255139 (0.151981) | 0.433416 / 0.283200 (0.150216) | 0.089932 / 0.141683 (-0.051751) | 1.453919 / 1.452155 (0.001764) | 1.545892 / 1.492716 (0.053176) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224302 / 0.018006 (0.206296) | 0.415519 / 0.000490 (0.415029) | 0.000407 / 0.000200 (0.000207) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024104 / 0.037411 (-0.013307) | 0.098202 / 0.014526 (0.083676) | 0.106416 / 0.176557 (-0.070140) | 0.141090 / 0.737135 (-0.596045) | 0.110188 / 0.296338 (-0.186150) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478252 / 0.215209 (0.263043) | 4.739684 / 2.077655 (2.662029) | 2.419040 / 1.504120 (0.914920) | 2.217705 / 1.541195 (0.676510) | 2.303288 / 1.468490 (0.834798) | 0.696682 / 4.584777 (-3.888095) | 3.401962 / 3.745712 (-0.343750) | 1.886015 / 5.269862 (-3.383846) | 1.175084 / 4.565676 (-3.390592) | 0.083064 / 0.424275 (-0.341211) | 0.012613 / 0.007607 (0.005006) | 0.579105 / 0.226044 (0.353060) | 5.792119 / 2.268929 (3.523191) | 2.889778 / 55.444624 (-52.554846) | 2.537438 / 6.876477 (-4.339039) | 2.574814 / 2.142072 (0.432741) | 0.803438 / 4.805227 (-4.001789) | 0.151912 / 6.500664 (-6.348752) | 0.068291 / 0.075469 (-0.007178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286002 / 1.841788 (-0.555786) | 14.179443 / 8.074308 (6.105135) | 13.443939 / 10.191392 (3.252547) | 0.152427 / 0.680424 (-0.527996) | 0.017248 / 0.534201 (-0.516953) | 0.378734 / 0.579283 (-0.200549) | 0.382276 / 0.434364 (-0.052087) | 0.465323 / 0.540337 (-0.075014) | 0.556454 / 1.386936 (-0.830482) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008675 / 0.011353 (-0.002678) | 0.004537 / 0.011008 (-0.006471) | 0.100179 / 0.038508 (0.061671) | 0.029307 / 0.023109 (0.006198) | 0.294687 / 0.275898 (0.018789) | 0.356868 / 0.323480 (0.033388) | 0.006992 / 0.007986 (-0.000994) | 0.003380 / 0.004328 (-0.000949) | 0.076961 / 0.004250 (0.072710) | 0.036047 / 0.037052 (-0.001005) | 0.308037 / 0.258489 (0.049548) | 0.341089 / 0.293841 (0.047248) | 0.033416 / 0.128546 (-0.095131) | 0.011534 / 0.075646 (-0.064112) | 0.322976 / 0.419271 (-0.096296) | 0.040894 / 0.043533 (-0.002639) | 0.296501 / 0.255139 (0.041362) | 0.324605 / 0.283200 (0.041405) | 0.086713 / 0.141683 (-0.054970) | 1.502784 / 1.452155 (0.050630) | 1.535013 / 1.492716 (0.042297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186647 / 0.018006 (0.168641) | 0.411003 / 0.000490 (0.410514) | 0.003594 / 0.000200 (0.003394) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023704 / 0.037411 (-0.013707) | 0.096154 / 0.014526 (0.081629) | 0.103671 / 0.176557 (-0.072885) | 0.138878 / 0.737135 (-0.598258) | 0.106947 / 0.296338 (-0.189391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417180 / 0.215209 (0.201970) | 4.149579 / 2.077655 (2.071925) | 1.865763 / 1.504120 (0.361643) | 1.669722 / 1.541195 (0.128527) | 1.722345 / 1.468490 
(0.253855) | 0.695910 / 4.584777 (-3.888867) | 3.342266 / 3.745712 (-0.403446) | 1.884568 / 5.269862 (-3.385294) | 1.265013 / 4.565676 (-3.300664) | 0.081836 / 0.424275 (-0.342439) | 0.012371 / 0.007607 (0.004764) | 0.522997 / 0.226044 (0.296953) | 5.225434 / 2.268929 (2.956506) | 2.304701 / 55.444624 (-53.139924) | 1.949067 / 6.876477 (-4.927410) | 2.016347 / 2.142072 (-0.125725) | 0.809850 / 4.805227 (-3.995377) | 0.148396 / 6.500664 (-6.352268) | 0.063340 / 0.075469 (-0.012129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224621 / 1.841788 (-0.617167) | 13.814223 / 8.074308 (5.739915) | 13.879728 / 10.191392 (3.688336) | 0.149530 / 0.680424 (-0.530894) | 0.028439 / 0.534201 (-0.505762) | 0.392726 / 0.579283 (-0.186557) | 0.396894 / 0.434364 (-0.037469) | 0.474395 / 0.540337 (-0.065943) | 0.569090 / 1.386936 (-0.817847) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006483 / 0.011353 (-0.004870) | 0.004527 / 0.011008 (-0.006481) | 0.098038 / 0.038508 (0.059530) | 0.027239 / 0.023109 (0.004130) | 0.441773 / 0.275898 (0.165875) | 0.471448 / 0.323480 (0.147968) | 0.005034 / 0.007986 (-0.002951) | 0.004732 / 0.004328 (0.000403) | 0.075036 / 0.004250 (0.070785) | 0.036711 / 0.037052 (-0.000341) | 0.442634 / 0.258489 (0.184145) | 0.476479 / 0.293841 (0.182638) | 0.031303 / 0.128546 (-0.097243) | 0.011642 / 0.075646 (-0.064005) | 0.320750 / 0.419271 (-0.098521) | 0.048698 / 0.043533 (0.005165) | 0.441205 / 0.255139 (0.186066) | 0.464845 / 0.283200 (0.181645) | 0.092716 / 0.141683 (-0.048967) | 1.510028 / 1.452155 (0.057874) | 1.574065 / 1.492716 (0.081349) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220756 / 0.018006 (0.202750) | 0.393971 / 0.000490 (0.393482) | 0.002506 / 0.000200 (0.002306) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024455 / 0.037411 (-0.012956) | 0.100164 / 0.014526 (0.085638) | 0.108053 / 0.176557 (-0.068504) | 0.142973 / 0.737135 (-0.594163) | 0.110108 / 0.296338 (-0.186231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473639 / 0.215209 (0.258430) | 4.737521 / 2.077655 (2.659866) | 2.466208 / 1.504120 (0.962088) | 2.272608 / 1.541195 (0.731413) | 2.349255 / 1.468490 (0.880764) | 0.699928 / 4.584777 (-3.884849) | 3.348443 / 3.745712 (-0.397269) | 2.604611 / 5.269862 (-2.665250) | 1.543080 / 4.565676 (-3.022597) | 0.082627 / 0.424275 (-0.341648) | 0.012251 / 0.007607 (0.004644) | 0.569949 / 0.226044 (0.343905) | 5.732316 / 2.268929 (3.463388) | 2.913541 / 55.444624 (-52.531084) | 2.560584 / 6.876477 (-4.315892) | 2.615192 / 2.142072 (0.473120) | 0.803822 / 4.805227 (-4.001406) | 0.150821 / 6.500664 (-6.349843) | 0.067128 / 0.075469 (-0.008341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272278 / 1.841788 (-0.569510) | 13.783339 / 8.074308 (5.709030) | 13.243601 / 10.191392 (3.052209) | 0.136421 / 0.680424 (-0.544003) | 0.016565 / 0.534201 (-0.517636) | 0.381102 / 0.579283 (-0.198181) | 0.386166 / 0.434364 (-0.048197) | 0.474249 / 0.540337 (-0.066089) | 0.566826 / 1.386936 (-0.820110) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4194/comments | https://api.github.com/repos/huggingface/datasets/issues/4194/events | https://github.com/huggingface/datasets/pull/4194 | 1,210,958,602 | PR_kwDODunzps42jjD3 | 4,194 | Support lists of multi-dimensional numpy arrays | [] | closed | false | null | 1 | 2022-04-21T12:22:26Z | 2022-05-12T15:16:34Z | 2022-05-12T15:08:40Z | null | Fix #4191.
CC: @SaulLu | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4194/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4194",
"merged_at": "2022-05-12T15:08:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4194"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2459/comments | https://api.github.com/repos/huggingface/datasets/issues/2459/events | https://github.com/huggingface/datasets/issues/2459 | 915,222,015 | MDU6SXNzdWU5MTUyMjIwMTU= | 2,459 | `Proto_qa` hosting seems to be broken | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-06-08T16:16:32Z | 2021-06-10T08:31:09Z | 2021-06-10T08:31:09Z | null | ## Describe the bug
The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now.
@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("proto_qa")
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
use_auth_token=use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators
train_fpath = dl_manager.download(_URLs[self.config.name]["train"])
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download
num_proc=download_config.num_proc,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2459/timeline | null | completed | null | null | false | [
"@VictorSanh , I think @mariosasko is already working on it. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1157/comments | https://api.github.com/repos/huggingface/datasets/issues/1157/events | https://github.com/huggingface/datasets/pull/1157 | 757,657,888 | MDExOlB1bGxSZXF1ZXN0NTMzMDAwNDQy | 1,157 | Add dataset XhosaNavy English -Xhosa | [] | closed | false | null | 0 | 2020-12-05T11:19:54Z | 2020-12-07T09:11:33Z | 2020-12-07T09:11:33Z | null | Add dataset XhosaNavy English -Xhosa
More info : http://opus.nlpl.eu/XhosaNavy.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1157/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1157.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1157",
"merged_at": "2020-12-07T09:11:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1157.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1157"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6020/comments | https://api.github.com/repos/huggingface/datasets/issues/6020/events | https://github.com/huggingface/datasets/issues/6020 | 1,799,720,536 | I_kwDODunzps5rRY5Y | 6,020 | Inconsistent "The features can't be aligned" error when combining map, multiprocessing, and variable length outputs | [] | open | false | null | 1 | 2023-07-11T20:40:38Z | 2023-07-12T15:58:24Z | null | null | ### Describe the bug
I'm using a dataset with map and multiprocessing to run a function that returns a variable-length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dataset shard consisting of a single item. This results in a `The features can't be aligned` error that is difficult to debug because it depends on the number of processes/shards used.
I've reproduced a minimal example below. My current workaround is to fill empty results with a dummy value that I filter out afterwards, but this was a weird error that took a while to track down.
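A self-contained sketch of that dummy-value workaround (the `-1` sentinel and the `test_func_safe` name are only illustrative):
```python
import datasets

dataset = datasets.Dataset.from_list([{'idx': i} for i in range(60)])

def test_func_safe(row, idx):
    # return an arbitrary sentinel instead of an empty list so all shards agree on the features
    if idx == 58:
        return {'output': [{'test': -1}]}
    return {'output': [{'test': 1}, {'test': 2}]}

test2 = dataset.map(lambda row, idx: test_func_safe(row, idx), with_indices=True, num_proc=32)
# drop the sentinel rows afterwards
test2 = test2.filter(lambda row: row['output'] != [{'test': -1}])
```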
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_list([{'idx':i} for i in range(60)])
def test_func(row, idx):
if idx==58:
return {'output': []}
else:
return {'output' : [{'test':1}, {'test':2}]}
# this works fine
test1 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=4)
# this fails
test2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32)
>ValueError: The features can't be aligned because the key output of features {'idx': Value(dtype='int64', id=None), 'output': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='null', id=None), length=-1, id=None) (expected either [{'test': Value(dtype='int64', id=None)}] or Value("null").
```
The error occurs during the check
```python
_check_if_features_can_be_aligned([dset.features for dset in dsets])
```
When the multiprocessing splitting lines up just right with the empty return value, one of the `dset` in `dsets` will have a single item with an empty list value, causing the error.
### Expected behavior
Expected behavior is that the result would be the same regardless of the `num_proc` value used.
### Environment info
Datasets version 2.11.0
Python 3.9.16 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6020/timeline | null | null | null | null | false | [
"This scenario currently requires explicitly passing the target features (to avoid the error): \r\n```python\r\nimport datasets\r\n\r\n...\r\n\r\nfeatures = dataset.features\r\nfeatures[\"output\"] = = [{\"test\": datasets.Value(\"int64\")}]\r\ntest2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32, features=features)\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/3426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3426/comments | https://api.github.com/repos/huggingface/datasets/issues/3426/events | https://github.com/huggingface/datasets/pull/3426 | 1,078,670,031 | PR_kwDODunzps4vxEN5 | 3,426 | Update disaster_response_messages download urls (+ add validation split) | [] | closed | false | null | 0 | 2021-12-13T15:30:12Z | 2021-12-14T14:38:30Z | 2021-12-14T14:38:29Z | null | Fixes #3240, fixes #3416 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3426/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3426.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3426",
"merged_at": "2021-12-14T14:38:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3426.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3426"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5967/comments | https://api.github.com/repos/huggingface/datasets/issues/5967/events | https://github.com/huggingface/datasets/issues/5967 | 1,763,926,520 | I_kwDODunzps5pI2H4 | 5,967 | Config name / split name lost after map with multiproc | [] | open | false | null | 2 | 2023-06-19T17:27:36Z | 2023-06-28T08:55:25Z | null | null | ### Describe the bug
Performing a `.map` on a dataset loses its config name / split name, but only if run with multiprocessing
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor
import numpy as np
# load dummy dataset
libri = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
# make train / test splits
libri = libri["validation"].train_test_split(seed=42, shuffle=True, test_size=0.1)
# example feature extractor
model_id = "ntu-spml/distilhubert"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True)
sampling_rate = feature_extractor.sampling_rate
libri = libri.cast_column("audio", Audio(sampling_rate=sampling_rate))
max_duration = 30.0
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays,
sampling_rate=feature_extractor.sampling_rate,
max_length=int(feature_extractor.sampling_rate * max_duration),
truncation=True,
return_attention_mask=True,
)
return inputs
# single proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=1
)
print(10 * "=" ,"Single processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
# multi proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=2
)
print(10 * "=" ,"Multi processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
```
**Print Output:**
```
========== Single processing ==========
Config name before: clean Split name before: validation
Config name after: clean Split name after: validation
========== Multi processing ==========
Config name before: clean Split name before: validation
Config name after: None Split name after: None
```
=> we can see that the config/split names are lost in the multiprocessing setting
### Expected behavior
Should retain both config / split names in the multiproc setting
### Environment info
- `datasets` version: 2.13.1.dev0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5967/timeline | null | null | null | null | false | [
"This must be due to DatasetInfo.from_merge which drops them and is used in `concatenate_datasets`.\r\n\r\nAnd you're experiencing this issue because multiprocessing does concatenate the resulting datasets from each process.\r\n\r\nMaybe they should be kept if all the subdatasets share the same values for config_name and split",
"That sounds like a clean workaround!"
] |
https://api.github.com/repos/huggingface/datasets/issues/566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/566/comments | https://api.github.com/repos/huggingface/datasets/issues/566/events | https://github.com/huggingface/datasets/pull/566 | 691,160,208 | MDExOlB1bGxSZXF1ZXN0NDc3OTM2NTIz | 566 | Remove logger pickling to fix gg colab issues | [] | closed | false | null | 0 | 2020-09-02T16:16:21Z | 2020-09-03T16:31:53Z | 2020-09-03T16:31:52Z | null | A `logger` objects are not picklable in google colab, contrary to `logger` objects in jupyter notebooks or in python shells.
It creates some issues in google colab right now.
Indeed by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in an error (full stacktrace [here](http://pastebin.fr/64330)):
```python
/usr/local/lib/python3.6/dist-packages/zmq/backend/cython/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__()
TypeError: no default __reduce__ due to non-trivial __cinit__
```
To fix that, I no longer dump the transform itself (`_map_single`, `select`, etc.), but only its full name (`nlp.arrow_dataset.Dataset._map_single`, `nlp.arrow_dataset.Dataset.select`, etc.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/566/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/566.diff",
"html_url": "https://github.com/huggingface/datasets/pull/566",
"merged_at": "2020-09-03T16:31:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/566.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/566"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3191/comments | https://api.github.com/repos/huggingface/datasets/issues/3191/events | https://github.com/huggingface/datasets/issues/3191 | 1,041,225,111 | I_kwDODunzps4-D9WX | 3,191 | Dataset viewer issue for '*compguesswhat*' | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 4 | 2021-11-01T14:16:49Z | 2022-09-12T08:02:29Z | 2022-09-12T08:02:29Z | null | ## Dataset viewer issue for '*compguesswhat*'
**Link:** https://huggingface.co/datasets/compguesswhat
File not found
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3191/timeline | null | completed | null | null | false | [
"```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/compguesswhat/4d08b9e0a8d1cf036c9626c93be4a759fdd9fcce050ea503ea14b075e830c799/compguesswhat.py\", line 251, in _generate_examples\r\n with gzip.open(filepath) as in_file:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 58, in open\r\n binary_file = GzipFile(filename, gz_mode, compresslevel)\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 173, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://compguesswhat-original/0.2.0/compguesswhat.train.jsonl.gz::https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1'\r\n```\r\n\r\nIt's an issue with the streaming mode. Note that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. This dataset is above the limit, hence the error.\r\n\r\nSame case as https://github.com/huggingface/datasets/issues/3186#issuecomment-1096549774.",
"cc @huggingface/datasets ",
"There is an issue with the URLs of their data files: https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1\r\n> Dropbox Error: That didn't work for some reason\r\n\r\nError reported to their repo:\r\n- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1",
"Closed by:\r\n- #4968"
] |
https://api.github.com/repos/huggingface/datasets/issues/5201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5201/comments | https://api.github.com/repos/huggingface/datasets/issues/5201/events | https://github.com/huggingface/datasets/pull/5201 | 1,435,881,554 | PR_kwDODunzps5CM0zn | 5,201 | Do not sort splits in dataset info | [] | closed | false | null | 5 | 2022-11-04T10:47:21Z | 2022-11-04T14:47:37Z | 2022-11-04T14:45:09Z | null | I suggest not to sort splits by their names in dataset_info in README so that they are displayed in the order specified in the loading script. Otherwise `test` split is displayed first, see this repo: https://huggingface.co/datasets/paws
What do you think?
But I added sorting in tests to fix CI (for the same dataset). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5201/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5201",
"merged_at": "2022-11-04T14:45:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5201"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"It would be coherent with https://github.com/huggingface/datasets-server/issues/614#issuecomment-1290534153",
"I think we started working on this issue nearly at the same time... :sweat_smile: \r\n- CI was fixed with this: https://huggingface.co/datasets/paws/discussions/1\r\n\r\nRelated issue:\r\n- #5202",
"@albertvillanova yeah I noticed it right after the PR :smile: thank you! the fix of the dataset info yaml fixes tests on CI, but in general order of splits in yaml influences the order in which they are displayed in the viewer, if I understand it correctly. So I suggest not to sort splits in yaml initially to avoid this for other datasets in the future. I think [this change](https://github.com/huggingface/datasets/pull/5201/files#diff-198ba4fdf2f94cb3e1aba8a0170a43b08d4ab5636d682374321c5a383a8be24dR571) should work for it. \r\n\r\nChanges to tests here maybe can be reverted considering that order in yaml now corresponds to the one in tests, thanks to your change in the dataset info.",
"Hehe, @polinaeterna, we make comments nearly at the same time as well... :laughing: "
] |
https://api.github.com/repos/huggingface/datasets/issues/1113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1113/comments | https://api.github.com/repos/huggingface/datasets/issues/1113/events | https://github.com/huggingface/datasets/pull/1113 | 757,115,557 | MDExOlB1bGxSZXF1ZXN0NTMyNTQ1Mzg2 | 1,113 | add qed | [] | closed | false | null | 0 | 2020-12-04T13:47:57Z | 2020-12-05T15:46:21Z | 2020-12-05T15:41:57Z | null | adding QED: Dataset for Explanations in Question Answering
https://github.com/google-research-datasets/QED
https://arxiv.org/abs/2009.06354 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1113/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1113/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1113.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1113",
"merged_at": "2020-12-05T15:41:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1113.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1113"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2775/comments | https://api.github.com/repos/huggingface/datasets/issues/2775/events | https://github.com/huggingface/datasets/issues/2775 | 964,303,626 | MDU6SXNzdWU5NjQzMDM2MjY= | 2,775 | `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 3 | 2021-08-09T19:28:51Z | 2021-08-26T08:30:54Z | null | null | ## Describe the bug
**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below.
Hi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected:
https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L260-L265
However, what's not expected is that the `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like:
```text
Loading cached processed dataset at /home/xxx/.cache/huggingface/datasets/csv/default-xxx/0.0.0/xxx/cache-xxx.arrow
```
The path is exactly the same each run (e.g., last 26 runs).
This becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000.
I think that
https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L248
... is actually consistent because randomness is being controlled in HuggingFace/Transformers for reproducibility. I've added a demo of this below.
## Steps to reproduce the bug
```python
# Contents of print_fingerprint.py
from transformers import set_seed
from datasets.fingerprint import generate_random_fingerprint
set_seed(42)
print(generate_random_fingerprint())
```
```bash
for i in {0..10}; do
python print_fingerprint.py
done
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
```
## Expected results
After the "random hash" warning is emitted, a random hash is generated, and no outdated cached datasets are reused.
## Actual results
After the "random hash" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2775/timeline | null | null | null | null | false | [
"I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo",
"Hi !\r\n\r\nIMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RNG just to generate random fingerprints.\r\n\r\nAny opinion on this @LysandreJik ?",
"Yes, this sounds good @lhoestq "
] |
https://api.github.com/repos/huggingface/datasets/issues/5035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5035/comments | https://api.github.com/repos/huggingface/datasets/issues/5035/events | https://github.com/huggingface/datasets/pull/5035 | 1,388,914,476 | PR_kwDODunzps4_wVie | 5,035 | Fix typos in load docstrings and comments | [] | closed | false | null | 1 | 2022-09-28T08:05:07Z | 2022-09-28T17:28:40Z | 2022-09-28T17:26:15Z | null | Minor fix of typos in load docstrings and comments | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5035/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5035.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5035",
"merged_at": "2022-09-28T17:26:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5035.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5035"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2416/comments | https://api.github.com/repos/huggingface/datasets/issues/2416/events | https://github.com/huggingface/datasets/pull/2416 | 903,932,299 | MDExOlB1bGxSZXF1ZXN0NjU1MTM3NDUy | 2,416 | Add KLUE dataset | [] | closed | false | null | 7 | 2021-05-27T15:49:51Z | 2021-06-09T15:00:02Z | 2021-06-04T17:45:15Z | null | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2416/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2416/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2416",
"merged_at": "2021-06-04T17:45:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2416"
} | true | [
"I'm not sure why I got error like below when I auto-generate dummy data \"mrc\" \r\n```\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 0\r\nKeys should be unique and deterministic in nature\r\n```",
"> I'm not sure why I got error like below when I auto-generate dummy data \"mrc\"\r\n> \r\n> ```\r\n> datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\n> Found duplicate Key: 0\r\n> Keys should be unique and deterministic in nature\r\n> ```\r\n\r\nPlease check out the suggestion below. I think it might be a cause.",
"> > I'm not sure why I got error like below when I auto-generate dummy data \"mrc\"\r\n> > ```\r\n> > datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\n> > Found duplicate Key: 0\r\n> > Keys should be unique and deterministic in nature\r\n> > ```\r\n> \r\n> Please check out the suggestion below. I think it might be a cause.\r\n\r\nThe problem was `id_` in mrc when yield was not unique. (I used index in `enumerate(paragraphs)` by mistake)\r\nI fixed it and update all the things",
"To fix the CI you can just merge master into your branch and it should be all green hopefully :)",
"@lhoestq\r\nThanks for reviewing!\r\n\r\nIt's harder than I thought to add dataset card. 😅 \r\nI checked and updated your suggestion (script, readme details, dummy data). \r\n\r\ndummy data is little bit larger than expected because `ner` dataset is about 80 lines and `dp` dataset is about 25 lines to avoid 0 examples.\r\n\r\nI'm not sure why some CI keep fails, can u check for this?",
"Thanks ! That makes sense for ner and dp\r\n\r\nFor mrc on the other hand there are still too many examples, maybe you can generate the dummy data for 5 examples for all tasks except ner and dp ?",
"> Thanks ! That makes sense for ner and dp\r\n> \r\n> For mrc on the other hand there are still too many examples, maybe you can generate the dummy data for 5 examples for all tasks except ner and dp ?\r\n\r\nYes, I generate default lines in dataset-cli for other dataset except \"dp\" and \"ner\"\r\nI fixed mrc dataset, hope it's fine now :)\r\n\r\nthe reason CI failed was I forgot to merge master into my branch 😅 "
] |
https://api.github.com/repos/huggingface/datasets/issues/1915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1915/comments | https://api.github.com/repos/huggingface/datasets/issues/1915/events | https://github.com/huggingface/datasets/issues/1915 | 812,229,654 | MDU6SXNzdWU4MTIyMjk2NTQ= | 1,915 | Unable to download `wiki_dpr` | [] | closed | false | null | 3 | 2021-02-19T18:11:32Z | 2021-03-03T17:40:48Z | 2021-03-03T17:40:48Z | null | I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the following error:
`datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}`
I tried adding in flags `with_embeddings=False` and `with_index=False`:
`curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")`
But I got the following error:
`raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37’, 
‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2’}`
Is there anything else I need to set to download the dataset?
**UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1915/timeline | null | completed | null | null | false | [
"Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix",
"I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !",
"Closing since this has been fixed by #1925"
] |
https://api.github.com/repos/huggingface/datasets/issues/3375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3375/comments | https://api.github.com/repos/huggingface/datasets/issues/3375/events | https://github.com/huggingface/datasets/pull/3375 | 1,070,454,913 | PR_kwDODunzps4vWrXz | 3,375 | Support streaming zipped dataset repo by passing only repo name | [] | closed | false | null | 6 | 2021-12-03T10:43:05Z | 2021-12-16T18:03:32Z | 2021-12-16T18:03:31Z | null | Proposed solution:
- I have added the method `iter_files` to DownloadManager and StreamingDownloadManager
- I use this in modules: "csv", "json", "text"
- I test for CSV/JSONL/TXT zipped (and non-zipped) files, both in streaming and non-streaming modes
Fix #3373. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3375/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3375/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3375.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3375",
"merged_at": "2021-12-16T18:03:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3375.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3375"
} | true | [
"I just tested and I think this only opens one file ? If there are several files in the ZIP, only the first one is opened. To open several files from a ZIP, one has to call `open` several times.\r\n\r\nWhat about updating the CSV loader to make it `download_and_extract` zip files, and open each extracted file ?",
"I have implemented the glob of ZIP files in the packaged modules:\r\n- csv\r\n- json\r\n- text",
"Also for streaming and non-streaming.",
"In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol[https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh-zip]\r\nFAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive - Fi...\r\nFAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped\r\n= 3 failed, 3553 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 192.79s (0:03:12) =\r\n```\r\n\r\nAfter re-running the CI in https://github.com/huggingface/datasets/pull/3375/commits/57bfe1f342cd3c59d2510b992d5f06a0761eb147, there was only 1 failing test:\r\n- On Linux:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped\r\n= 1 failed, 3555 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 199.76s (0:03:19) =\r\n```\r\n- On Windows:\r\n```\r\n=========================== short test summary info ===========================\r\nFAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_without_script\r\n= 1 failed, 3551 passed, 2954 skipped, 2 xfailed, 1 xpassed, 121 warnings in 478.58s (0:07:58) =\r\n```\r\n\r\nThe test `tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped` passes locally.\r\n\r\nI guess the issue is caused by those tests and has nothing to do with this PR.",
"@lhoestq my final proposed solution:\r\n- I have added the method `iter_files` to DownloadManager and StreamingDownloadManager\r\n- I use this in modules: \"csv\", \"json\", \"text\"\r\n- I test for CSV/JSONL/TXT zipped (and non-zipped) files, both in streaming and non-streaming modes",
"> Note that at one point we might consider switching to using `iter_archive` for ZIP files in the json/text/csv loaders since it should be faster.\r\n\r\nAs far as the functionality is kept... ;)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3143/comments | https://api.github.com/repos/huggingface/datasets/issues/3143/events | https://github.com/huggingface/datasets/issues/3143 | 1,033,569,655 | I_kwDODunzps49mwV3 | 3,143 | Provide a way to check if the features (in info) match with the data of a split | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | open | false | null | 1 | 2021-10-22T13:13:36Z | 2021-10-22T13:17:56Z | null | null | **Is your feature request related to a problem? Please describe.**
I understand that currently the data loaded has not always the type described in the info features
**Describe the solution you'd like**
Provide a way to check if the rows have the type described by info features
**Describe alternatives you've considered**
Always check it, and raise an error when loading the data if their type doesn't match the features.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3143/timeline | null | null | null | null | false | [
"Related: #3144 "
] |
https://api.github.com/repos/huggingface/datasets/issues/1164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1164/comments | https://api.github.com/repos/huggingface/datasets/issues/1164/events | https://github.com/huggingface/datasets/pull/1164 | 757,716,575 | MDExOlB1bGxSZXF1ZXN0NTMzMDQyMjA1 | 1,164 | Add DaNe dataset | [] | closed | false | null | 1 | 2020-12-05T16:36:50Z | 2020-12-08T12:50:18Z | 2020-12-08T12:49:55Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1164/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1164",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1164"
} | true | [
"Thanks, this looks great!\r\n\r\nFor the code quality test, it looks like `flake8` is throwing the error, so you can tun `flake8 datasets` locally and fix the errors it points out until it passes"
] |