url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.83B) | node_id (stringlengths 18-32) | number (int64 1-6.09k) | title (stringlengths 1-290) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | milestone (dict) | comments (int64 0-54) | created_at (stringlengths 20) | updated_at (stringlengths 20) | closed_at (stringlengths 20, ⌀) | active_lock_reason (null) | body (stringlengths 0-228k, ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2860/comments | https://api.github.com/repos/huggingface/datasets/issues/2860/events | https://github.com/huggingface/datasets/issues/2860 | 985,013,339 | MDU6SXNzdWU5ODUwMTMzMzk= | 2,860 | Cannot download TOTTO dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-01T11:04:10Z | 2021-09-02T06:47:40Z | 2021-09-02T06:47:40Z | null | Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
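A minimal retry sketch once the loader's URL is updated upstream (hypothetical: it assumes an upgraded `datasets` release containing the fix and that the string form of `download_mode` is accepted by that version):
```py
from datasets import load_dataset

# Force a fresh download so a cached, partially downloaded archive from the
# old URL is not reused (illustrative usage; upgrade `datasets` first).
dataset = load_dataset("totto", download_mode="force_redownload")
```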
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2860/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2860/timeline | null | completed | null | null | false | [
"Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/5874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5874/comments | https://api.github.com/repos/huggingface/datasets/issues/5874/events | https://github.com/huggingface/datasets/issues/5874 | 1,715,708,930 | I_kwDODunzps5mQ6QC | 5,874 | Using as_dataset on a "parquet" builder | [] | closed | false | null | 1 | 2023-05-18T14:09:03Z | 2023-05-31T13:23:55Z | 2023-05-31T13:23:55Z | null | ### Describe the bug
I used a custom builder to ``download_and_prepare`` a dataset. The first (very minor) issue is that the doc seems to suggest ``download_and_prepare`` will return the dataset, while it does not ([builder.py](https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L718-L738)).
```
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder("rotten_tomatoes")
>>> ds = builder.download_and_prepare("./output_dir", file_format="parquet")
```
The main issue I am facing is loading the dataset from those parquet files. I used the `as_dataset` method suggested by the doc, however it returns:
`
FileNotFoundError: [Errno 2] Failed to open local file 'output_dir/__main__-train-00000-of-00245.arrow'. Detail:
[errno 2] No such file or directory.
`
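A minimal workaround sketch, assuming the Parquet shards were written directly under `./output_dir` as in the snippet above: skip `as_dataset` and read the shards back with the packaged `parquet` builder (illustrative only, not necessarily the intended API):
```
>>> from datasets import load_dataset
>>> # Glob the Parquet shards produced by download_and_prepare and load them
>>> # as a regular Dataset (adjust the pattern if a config subfolder is used).
>>> ds = load_dataset("parquet", data_files="./output_dir/**/*.parquet", split="train")
```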
### Steps to reproduce the bug
1. Create a custom builder of some sort: `builder = CustomBuilder()`.
2. Run `download_and_prepare` with the parquet format: `builder.download_and_prepare("./output_dir", file_format="parquet")`.
3. Run `dataset = builder.as_dataset()`.
### Expected behavior
I guess I'd expect `as_dataset` to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with `load_dataset` to no avail, probably due to misunderstandings on my part).
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.14.1
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5874/timeline | null | completed | null | null | false | [
"Hi! You can refer to [this doc](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) to see the intended usage (basically, it skips the Arrow -> Parquet conversion step in `ds = load_dataset(...); ds.to_parquet(\"path/to/parquet\")`) and allows writing Parquet to remote storage unlike `to_parquet`).\r\n\r\n> I guess I'd expect as_dataset to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with load_dataset to no avail, probably due to misunderstandings on my part).\r\n\r\n`as_dataset` does not work with `file_format=\"parquet\"` files as Parquet files cannot be memory-mapped, so I think we should just raise an error in that case.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5841/comments | https://api.github.com/repos/huggingface/datasets/issues/5841/events | https://github.com/huggingface/datasets/issues/5841 | 1,705,286,639 | I_kwDODunzps5lpJvv | 5,841 | Abusurdly slow on iteration | [] | closed | false | null | 4 | 2023-05-11T08:04:09Z | 2023-05-15T15:38:13Z | 2023-05-15T15:38:13Z | null | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
import torch
from tqdm import tqdm
from datasets import Dataset

a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
Furthermore, if I increase the size of `a` to an image shape, like:
```python
a=torch.randn(3,224,224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
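A sketch of the fixed-shape plus batched-iteration workaround discussed in the comments below (assuming a `datasets` version recent enough to provide `Dataset.iter`, as used there):
```python
import torch
from datasets import Array2D, Dataset, Features

a = torch.stack([torch.randn(100, 224)] * 10000)

# Declaring the column as a fixed-shape Array2D avoids per-row Python-list
# decoding, and iterating in batches amortizes the per-item formatting cost.
features = Features({"tensor": Array2D(shape=(100, 224), dtype="float32")})
ds = Dataset.from_dict({"tensor": a}, features=features).with_format("torch")
for batch in ds.iter(batch_size=100):
    pass
```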
### Steps to reproduce the bug
```python
import torch
from tqdm import tqdm
from datasets import Dataset

a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
### Expected behavior
iteration faster
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5841/timeline | null | completed | null | null | false | [
"Hi ! You can try to use the [Image](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Image) type which [decodes images on-the-fly](https://huggingface.co/docs/datasets/v2.12.0/en/about_dataset_features#image-feature) into pytorch tensors :)\r\n\r\n```python\r\nds = Dataset.from_dict({\"tensor\":a}).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 5.04 s, sys: 96.5 ms, total: 5.14 s\r\n# Wall time: 5.14 s\r\n# 10000\r\n```\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Image()})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 1.86 s, sys: 49 ms, total: 1.91 s\r\n# Wall time: 1.9 s\r\n# 10000\r\n```\r\n\r\n-> Speed x2.7\r\n\r\nAnd if you want to keep using arrays of integers, consider using the [Array2D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array2D) or [Array3D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array3D) types which are even faster (since it doesn't decode images):\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Array2D(shape=(100, 224), dtype=\"float32\")})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 828 ms, sys: 68.4 ms, total: 896 ms\r\n# Wall time: 897 ms\r\n# 10000\r\n```\r\n\r\n-> Speed x5.7\r\n\r\nBatching also speeds up a lot\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\ndl = DataLoader(ds, batch_size=100)\r\n%time sum(1 for _ in dl)\r\n# CPU times: user 564 ms, sys: 83.5 ms, total: 648 ms\r\n# Wall time: 579 ms\r\n# 100\r\n```\r\n\r\n-> Speed x8.9\r\n\r\n```python\r\n%time sum(1 for _ in ds.iter(batch_size=100))\r\n# CPU times: user 119 ms, sys: 96.8 ms, total: 215 ms\r\n# Wall time: 117 ms\r\n# 100\r\n```\r\n\r\n-> Speed x46",
"Anyway, regarding the speed difference between numpy and pytorch, I think the issue is that we first convert numpy sub-arrays to pytorch and then consolidate into one tensor, while we should to the opposite. Indeed converting a numpy array to pytorch has a fix cost that seems to cause a slow down. The current pipeline is\r\n\r\n```\r\narrow -> nested numpy arrays -> lists of torch tensors -> one torch tensor\r\n```\r\n\r\nand we should do\r\n\r\n```\r\narrow -> nested numpy arrays -> one numpy array -> one torch tensor\r\n```",
"I have a similar issue: iterating over a dataset takes 5s without applying any transform, but takes ~30s after applying a transform.\r\nHere is the minimum code to reproduce the problem\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\nimport torchvision \r\nfrom torchvision.transforms import ToTensor, Normalize\r\n\r\n\r\n#################################\r\n# Without transform\r\n#################################\r\n \r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data, no transform\"):\r\n pass\r\n\r\n\r\n#################################\r\n# With transform\r\n#################################\r\n\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data after transform\"):\r\n pass \r\n```\r\n\r\nI have also tried converting the Image column to an Array3D\r\n```python\r\nimg_shape = train_dataset[0][\"img\"].shape\r\n\r\nfeatures = train_dataset.features.copy()\r\nfeatures[\"x\"] = Array3D(shape=img_shape, dtype=\"float32\")\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"x\": np.array(x[\"img\"], dtype=np.uint8)},\r\n features=features,\r\n)\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"x\", \"fine_label\"])\r\n```\r\nbut to no avail. Any clue?",
"Thanks! I convert my dataset feature to Array3D and this speed became awesome!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5827/comments | https://api.github.com/repos/huggingface/datasets/issues/5827/events | https://github.com/huggingface/datasets/issues/5827 | 1,698,891,246 | I_kwDODunzps5lQwXu | 5,827 | load json dataset interrupt when dtype cast problem occured | [] | open | false | null | 1 | 2023-05-07T04:52:09Z | 2023-05-10T12:32:28Z | null | null | ### Describe the bug
I have a JSON file like this:
[
{"id": 1, "name": 1},
{"id": 2, "name": "Nan"},
{"id": 3, "name": 3},
....
]
which has several problematic rows, like row 2. When I load it with `datasets.load_dataset('json', data_files=['xx.json'], split='train')`, it reports:
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file 'C:\Users\gawinjunwu\Downloads\test\data\a.json' with error <class 'pyarrow.lib.ArrowInvalid'>: Could not convert '2' with type str: tried to convert to int64
Traceback (most recent call last):
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1858, in _prepare_split_single
for _, table in generator:
File "D:\Python3.9\lib\site-packages\datasets\packaged_modules\json\json.py", line 146, in _generate_tables
raise ValueError(f"Not able to read records in the JSON file at {file}.") from None
ValueError: Not able to read records in the JSON file at C:\Users\gawinjunwu\Downloads\test\data\a.json.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\gawinjunwu\Downloads\test\scripts\a.py", line 4, in <module>
ds = load_dataset('json', data_dir='data', split='train')
File "D:\Python3.9\lib\site-packages\datasets\load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1746, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1891, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset.
Could `datasets` skip those problematic data rows?
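Until such an option exists, a minimal pre-cleaning sketch (hypothetical; it assumes the file fits in memory and a `datasets` version that provides `Dataset.from_list`) is to normalize the mixed-type column before building the dataset:
```python
import json

from datasets import Dataset

# Read the raw records, coerce the inconsistently typed "name" field to str,
# then build the Dataset in memory so Arrow type inference succeeds.
with open("xx.json", encoding="utf-8") as f:
    records = json.load(f)
for record in records:
    record["name"] = str(record["name"])
ds = Dataset.from_list(records)
```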
### Steps to reproduce the bug
Prepare a JSON file like this:
[
{"id": 1, "name": 1},
{"id": 2, "name": "Nan"},
{"id": 3, "name": 3}
]
then use `datasets.load_dataset('json', data_files=['xxx.json'])` to load the JSON file.
### Expected behavior
Skip the problematic data row (row 2) and load rows 1 and 3.
### Environment info
python3.9 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5827/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5827/timeline | null | null | null | null | false | [
"Indeed the JSON dataset builder raises an error when it encounters an unexpected type.\r\n\r\nThere's an old PR open to add away to ignore such elements though, if it can help: https://github.com/huggingface/datasets/pull/2838"
] |
https://api.github.com/repos/huggingface/datasets/issues/5949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5949/comments | https://api.github.com/repos/huggingface/datasets/issues/5949/events | https://github.com/huggingface/datasets/pull/5949 | 1,754,843,717 | PR_kwDODunzps5S4oPC | 5,949 | Replace metadata utils with `huggingface_hub`'s RepoCard API | [] | closed | false | null | 8 | 2023-06-13T13:03:19Z | 2023-06-27T16:47:51Z | 2023-06-27T16:38:32Z | null | Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`.
After removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources for the metadata UI.
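For context, a minimal sketch of the `huggingface_hub` RepoCard API being switched to (hypothetical local path and field; the PR wires this into `datasets`' own card handling rather than using it verbatim):
```python
from huggingface_hub import DatasetCard

# Load an existing card from a local README, edit its YAML front matter via
# the structured card.data object, and write it back (illustrative usage).
card = DatasetCard.load("path/to/README.md")
card.data.license = "apache-2.0"
card.save("path/to/README.md")
```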
PS: this change requires bumping `huggingface_hub` to 0.13.0 (Transformers requires 0.14.0, so should be ok) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5949/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5949.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5949",
"merged_at": "2023-06-27T16:38:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5949.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5949"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006635 / 0.011353 (-0.004718) | 0.004439 / 0.011008 (-0.006570) | 0.107831 / 0.038508 (0.069323) | 0.035664 / 0.023109 (0.012555) | 0.393733 / 0.275898 (0.117835) | 0.418336 / 0.323480 (0.094856) | 0.005739 / 0.007986 (-0.002247) | 0.005737 / 0.004328 (0.001408) | 0.079820 / 0.004250 (0.075569) | 0.045402 / 0.037052 (0.008349) | 0.396108 / 0.258489 (0.137619) | 0.422951 / 0.293841 (0.129110) | 0.030506 / 0.128546 (-0.098040) | 0.009785 / 0.075646 (-0.065861) | 0.375302 / 0.419271 (-0.043969) | 0.054355 / 0.043533 (0.010823) | 0.399652 / 0.255139 (0.144513) | 0.410825 / 0.283200 (0.127625) | 0.109238 / 0.141683 (-0.032445) | 1.687532 / 1.452155 (0.235378) | 1.736829 / 1.492716 (0.244113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226514 / 0.018006 (0.208508) | 0.487010 / 0.000490 (0.486520) | 0.006436 / 0.000200 (0.006236) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029097 / 0.037411 (-0.008315) | 0.122979 / 0.014526 (0.108453) | 0.129454 / 0.176557 (-0.047103) | 0.194006 / 0.737135 (-0.543129) | 0.137968 / 0.296338 (-0.158370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.466425 / 0.215209 (0.251216) | 4.627307 / 2.077655 (2.549652) | 2.108840 / 1.504120 (0.604720) | 1.882547 / 1.541195 (0.341353) | 1.891077 / 1.468490 
(0.422587) | 0.590646 / 4.584777 (-3.994131) | 4.176918 / 3.745712 (0.431205) | 2.071475 / 5.269862 (-3.198386) | 1.173815 / 4.565676 (-3.391862) | 0.075330 / 0.424275 (-0.348945) | 0.012944 / 0.007607 (0.005337) | 0.587080 / 0.226044 (0.361036) | 5.827053 / 2.268929 (3.558125) | 2.694258 / 55.444624 (-52.750366) | 2.276997 / 6.876477 (-4.599480) | 2.329678 / 2.142072 (0.187605) | 0.721860 / 4.805227 (-4.083367) | 0.159238 / 6.500664 (-6.341426) | 0.073013 / 0.075469 (-0.002456) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345396 / 1.841788 (-0.496391) | 16.619283 / 8.074308 (8.544975) | 14.754754 / 10.191392 (4.563362) | 0.180784 / 0.680424 (-0.499639) | 0.020376 / 0.534201 (-0.513825) | 0.451010 / 0.579283 (-0.128273) | 0.481524 / 0.434364 (0.047160) | 0.564777 / 0.540337 (0.024440) | 0.683232 / 1.386936 (-0.703704) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007243 / 0.011353 (-0.004110) | 0.005262 / 0.011008 (-0.005746) | 0.084090 / 0.038508 (0.045581) | 0.037429 / 0.023109 (0.014320) | 0.404038 / 0.275898 (0.128140) | 0.445040 / 0.323480 (0.121560) | 0.006220 / 0.007986 (-0.001766) | 0.004256 / 0.004328 (-0.000072) | 0.083794 / 0.004250 (0.079544) | 0.052655 / 0.037052 (0.015603) | 0.414083 / 0.258489 (0.155594) | 0.458190 / 0.293841 (0.164349) | 0.032719 / 0.128546 (-0.095828) | 0.010063 / 0.075646 (-0.065583) | 0.092281 / 0.419271 (-0.326990) | 0.053888 / 0.043533 (0.010355) | 0.407813 / 0.255139 (0.152674) | 0.431692 / 0.283200 (0.148493) | 0.119799 / 0.141683 (-0.021884) | 1.709853 / 1.452155 (0.257698) | 1.771592 / 1.492716 (0.278876) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246540 / 0.018006 (0.228534) | 0.483199 / 0.000490 (0.482709) | 0.002514 / 0.000200 (0.002315) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031576 / 0.037411 (-0.005835) | 0.130020 / 0.014526 (0.115495) | 0.140285 / 0.176557 (-0.036272) | 0.196164 / 0.737135 (-0.540972) | 0.143924 / 0.296338 (-0.152414) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488549 / 0.215209 (0.273340) | 4.888055 / 2.077655 (2.810400) | 2.389163 / 1.504120 (0.885043) | 2.184626 / 1.541195 (0.643431) | 2.260227 / 1.468490 (0.791737) | 0.601331 / 4.584777 (-3.983446) | 4.386159 / 3.745712 (0.640447) | 3.345814 / 5.269862 (-1.924048) | 1.734360 / 4.565676 (-2.831317) | 0.073199 / 0.424275 (-0.351076) | 0.012397 / 0.007607 (0.004790) | 0.601411 / 0.226044 (0.375366) | 6.135000 / 2.268929 (3.866072) | 2.930169 / 55.444624 (-52.514456) | 2.532631 / 6.876477 (-4.343845) | 2.619351 / 2.142072 (0.477279) | 0.740954 / 4.805227 (-4.064274) | 0.162936 / 6.500664 (-6.337728) | 0.073885 / 0.075469 (-0.001585) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502493 / 1.841788 (-0.339294) | 17.026756 / 8.074308 (8.952448) | 15.880958 / 10.191392 (5.689566) | 0.167261 / 0.680424 (-0.513163) | 0.020347 / 0.534201 (-0.513854) | 0.452902 / 0.579283 (-0.126381) | 0.481614 / 0.434364 (0.047250) | 0.539893 / 0.540337 (-0.000445) | 0.653401 / 1.386936 (-0.733535) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008268 / 0.011353 (-0.003084) | 0.005538 / 0.011008 (-0.005470) | 0.126136 / 0.038508 (0.087628) | 0.046100 / 0.023109 (0.022991) | 0.366882 / 0.275898 (0.090984) | 0.408912 / 0.323480 (0.085432) | 0.007090 / 0.007986 (-0.000895) | 0.004820 / 0.004328 (0.000491) | 0.091432 / 0.004250 (0.087181) | 0.058390 / 0.037052 (0.021338) | 0.368787 / 0.258489 (0.110298) | 0.419429 / 0.293841 (0.125588) | 0.034958 / 0.128546 (-0.093588) | 0.010526 / 0.075646 (-0.065120) | 0.463063 / 0.419271 (0.043791) | 0.070544 / 0.043533 (0.027011) | 0.366182 / 0.255139 (0.111043) | 0.390851 / 0.283200 (0.107652) | 0.128377 / 0.141683 (-0.013306) | 1.819385 / 1.452155 (0.367231) | 1.928834 / 1.492716 (0.436117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228413 / 0.018006 (0.210407) | 0.485511 / 0.000490 (0.485021) | 0.005395 / 0.000200 (0.005195) | 0.000119 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035209 / 0.037411 (-0.002203) | 0.144492 / 0.014526 (0.129967) | 0.150467 / 0.176557 (-0.026089) | 0.223861 / 0.737135 (-0.513274) | 0.156363 / 0.296338 (-0.139975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.517751 / 0.215209 (0.302542) | 5.150438 / 2.077655 (3.072783) | 2.483601 / 1.504120 (0.979481) | 2.279786 / 1.541195 (0.738592) | 2.374510 / 1.468490 
(0.906020) | 0.637547 / 4.584777 (-3.947230) | 4.845393 / 3.745712 (1.099681) | 2.241554 / 5.269862 (-3.028307) | 1.290105 / 4.565676 (-3.275572) | 0.079791 / 0.424275 (-0.344484) | 0.014915 / 0.007607 (0.007308) | 0.640468 / 0.226044 (0.414423) | 6.394810 / 2.268929 (4.125881) | 3.012748 / 55.444624 (-52.431876) | 2.625565 / 6.876477 (-4.250912) | 2.792435 / 2.142072 (0.650363) | 0.782284 / 4.805227 (-4.022944) | 0.171628 / 6.500664 (-6.329036) | 0.081714 / 0.075469 (0.006245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.592411 / 1.841788 (-0.249377) | 18.999604 / 8.074308 (10.925295) | 18.469946 / 10.191392 (8.278554) | 0.200878 / 0.680424 (-0.479546) | 0.021595 / 0.534201 (-0.512606) | 0.519247 / 0.579283 (-0.060036) | 0.534940 / 0.434364 (0.100576) | 0.656325 / 0.540337 (0.115987) | 0.789658 / 1.386936 (-0.597278) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008093 / 0.011353 (-0.003260) | 0.005524 / 0.011008 (-0.005484) | 0.092339 / 0.038508 (0.053831) | 0.045619 / 0.023109 (0.022510) | 0.449376 / 0.275898 (0.173478) | 0.478587 / 0.323480 (0.155107) | 0.006978 / 0.007986 (-0.001007) | 0.004622 / 0.004328 (0.000294) | 0.090618 / 0.004250 (0.086368) | 0.059321 / 0.037052 (0.022269) | 0.450989 / 0.258489 (0.192500) | 0.491652 / 0.293841 (0.197811) | 0.033308 / 0.128546 (-0.095238) | 0.010677 / 0.075646 (-0.064969) | 0.099836 / 0.419271 (-0.319435) | 0.055937 / 0.043533 (0.012404) | 0.440560 / 0.255139 (0.185421) | 0.475305 / 0.283200 (0.192105) | 0.130829 / 0.141683 (-0.010854) | 1.857943 / 1.452155 (0.405789) | 1.989534 / 1.492716 (0.496818) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244715 / 0.018006 (0.226709) | 0.482866 / 0.000490 (0.482377) | 0.001100 / 0.000200 (0.000900) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036288 / 0.037411 (-0.001124) | 0.147903 / 0.014526 (0.133377) | 0.154141 / 0.176557 (-0.022416) | 0.221863 / 0.737135 (-0.515272) | 0.162319 / 0.296338 (-0.134019) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.536972 / 0.215209 (0.321763) | 5.382866 / 2.077655 (3.305211) | 2.719575 / 1.504120 (1.215456) | 2.516596 / 1.541195 (0.975401) | 2.699602 / 1.468490 (1.231112) | 0.639886 / 4.584777 (-3.944891) | 5.109746 / 3.745712 (1.364034) | 2.260206 / 5.269862 (-3.009656) | 1.305506 / 4.565676 (-3.260170) | 0.080262 / 0.424275 (-0.344013) | 0.014801 / 0.007607 (0.007194) | 0.661228 / 0.226044 (0.435184) | 6.596485 / 2.268929 (4.327557) | 3.226114 / 55.444624 (-52.218510) | 2.859776 / 6.876477 (-4.016701) | 3.059355 / 2.142072 (0.917282) | 0.793413 / 4.805227 (-4.011814) | 0.176521 / 6.500664 (-6.324143) | 0.084062 / 0.075469 (0.008593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.642085 / 1.841788 (-0.199703) | 20.355459 / 8.074308 (12.281151) | 17.979620 / 10.191392 (7.788228) | 0.229329 / 0.680424 (-0.451094) | 0.025681 / 0.534201 (-0.508520) | 0.534142 / 0.579283 (-0.045141) | 0.623439 / 0.434364 (0.189075) | 0.621938 / 0.540337 (0.081601) | 0.759038 / 1.386936 (-0.627898) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007703 / 0.011353 (-0.003649) | 0.005362 / 0.011008 (-0.005646) | 0.113111 / 0.038508 (0.074602) | 0.038891 / 0.023109 (0.015782) | 0.348938 / 0.275898 (0.073040) | 0.398079 / 0.323480 (0.074599) | 0.006707 / 0.007986 (-0.001278) | 0.004489 / 0.004328 (0.000160) | 0.087194 / 0.004250 (0.082943) | 0.054268 / 0.037052 (0.017216) | 0.359949 / 0.258489 (0.101460) | 0.402959 / 0.293841 (0.109118) | 0.032508 / 0.128546 (-0.096038) | 0.010224 / 0.075646 (-0.065422) | 0.387007 / 0.419271 (-0.032264) | 0.058971 / 0.043533 (0.015439) | 0.345085 / 0.255139 (0.089946) | 0.384306 / 0.283200 (0.101107) | 0.122253 / 0.141683 (-0.019430) | 1.706353 / 1.452155 (0.254199) | 1.840780 / 1.492716 (0.348063) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254374 / 0.018006 (0.236368) | 0.497387 / 0.000490 (0.496897) | 0.012294 / 0.000200 (0.012094) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030902 / 0.037411 (-0.006509) | 0.132098 / 0.014526 (0.117573) | 0.140311 / 0.176557 (-0.036245) | 0.205887 / 0.737135 (-0.531249) | 0.143992 / 0.296338 (-0.152347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.467367 / 0.215209 (0.252158) | 4.669936 / 2.077655 (2.592281) | 2.155358 / 1.504120 (0.651238) | 1.984132 / 1.541195 (0.442937) | 2.102352 / 1.468490 
(0.633861) | 0.607014 / 4.584777 (-3.977763) | 4.396479 / 3.745712 (0.650767) | 4.666056 / 5.269862 (-0.603806) | 2.176649 / 4.565676 (-2.389028) | 0.072657 / 0.424275 (-0.351619) | 0.012367 / 0.007607 (0.004759) | 0.569706 / 0.226044 (0.343661) | 5.749083 / 2.268929 (3.480154) | 2.640824 / 55.444624 (-52.803801) | 2.310253 / 6.876477 (-4.566224) | 2.486748 / 2.142072 (0.344676) | 0.737891 / 4.805227 (-4.067336) | 0.163507 / 6.500664 (-6.337157) | 0.075776 / 0.075469 (0.000307) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362710 / 1.841788 (-0.479078) | 17.010705 / 8.074308 (8.936396) | 15.084231 / 10.191392 (4.892839) | 0.218274 / 0.680424 (-0.462150) | 0.019555 / 0.534201 (-0.514646) | 0.456013 / 0.579283 (-0.123270) | 0.502772 / 0.434364 (0.068408) | 0.581480 / 0.540337 (0.041142) | 0.686952 / 1.386936 (-0.699984) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007976 / 0.011353 (-0.003377) | 0.005141 / 0.011008 (-0.005868) | 0.086629 / 0.038508 (0.048121) | 0.039553 / 0.023109 (0.016444) | 0.433028 / 0.275898 (0.157130) | 0.463444 / 0.323480 (0.139964) | 0.006967 / 0.007986 (-0.001018) | 0.005814 / 0.004328 (0.001485) | 0.086266 / 0.004250 (0.082015) | 0.055384 / 0.037052 (0.018332) | 0.428733 / 0.258489 (0.170243) | 0.475670 / 0.293841 (0.181829) | 0.032872 / 0.128546 (-0.095674) | 0.010664 / 0.075646 (-0.064983) | 0.094357 / 0.419271 (-0.324915) | 0.058386 / 0.043533 (0.014854) | 0.431114 / 0.255139 (0.175975) | 0.441728 / 0.283200 (0.158528) | 0.131942 / 0.141683 (-0.009740) | 1.782214 / 1.452155 (0.330060) | 1.843185 / 1.492716 (0.350469) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247047 / 0.018006 (0.229041) | 0.488931 / 0.000490 (0.488441) | 0.002657 / 0.000200 (0.002457) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033893 / 0.037411 (-0.003518) | 0.131021 / 0.014526 (0.116495) | 0.142892 / 0.176557 (-0.033665) | 0.200955 / 0.737135 (-0.536180) | 0.151329 / 0.296338 (-0.145010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.521138 / 0.215209 (0.305929) | 5.085207 / 2.077655 (3.007552) | 2.652901 / 1.504120 (1.148781) | 2.401545 / 1.541195 (0.860350) | 2.553461 / 1.468490 (1.084971) | 0.615347 / 4.584777 (-3.969430) | 4.448038 / 3.745712 (0.702326) | 2.049997 / 5.269862 (-3.219865) | 1.190602 / 4.565676 (-3.375075) | 0.073356 / 0.424275 (-0.350919) | 0.013685 / 0.007607 (0.006078) | 0.626705 / 0.226044 (0.400660) | 6.391941 / 2.268929 (4.123012) | 3.218864 / 55.444624 (-52.225760) | 2.858808 / 6.876477 (-4.017669) | 3.005808 / 2.142072 (0.863736) | 0.740725 / 4.805227 (-4.064502) | 0.161904 / 6.500664 (-6.338760) | 0.073727 / 0.075469 (-0.001742) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.488623 / 1.841788 (-0.353164) | 17.584367 / 8.074308 (9.510059) | 16.281818 / 10.191392 (6.090426) | 0.164482 / 0.680424 (-0.515942) | 0.020197 / 0.534201 (-0.514003) | 0.456750 / 0.579283 (-0.122533) | 0.501156 / 0.434364 (0.066792) | 0.549779 / 0.540337 (0.009442) | 0.650156 / 1.386936 (-0.736780) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008337 / 0.011353 (-0.003016) | 0.005911 / 0.011008 (-0.005097) | 0.129037 / 0.038508 (0.090529) | 0.046071 / 0.023109 (0.022962) | 0.418657 / 0.275898 (0.142759) | 0.490340 / 0.323480 (0.166860) | 0.006387 / 0.007986 (-0.001598) | 0.004724 / 0.004328 (0.000396) | 0.097953 / 0.004250 (0.093702) | 0.069025 / 0.037052 (0.031972) | 0.431178 / 0.258489 (0.172689) | 0.458363 / 0.293841 (0.164522) | 0.049341 / 0.128546 (-0.079205) | 0.014637 / 0.075646 (-0.061009) | 0.439800 / 0.419271 (0.020529) | 0.069905 / 0.043533 (0.026373) | 0.406775 / 0.255139 (0.151636) | 0.441989 / 0.283200 (0.158790) | 0.046009 / 0.141683 (-0.095674) | 1.847630 / 1.452155 (0.395475) | 1.904067 / 1.492716 (0.411351) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288305 / 0.018006 (0.270299) | 0.594547 / 0.000490 (0.594058) | 0.005600 / 0.000200 (0.005400) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033847 / 0.037411 (-0.003564) | 0.125139 / 0.014526 (0.110613) | 0.147982 / 0.176557 (-0.028574) | 0.208396 / 0.737135 (-0.528739) | 0.144005 / 0.296338 (-0.152334) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669175 / 0.215209 (0.453966) | 6.605289 / 2.077655 (4.527634) | 2.720468 / 1.504120 (1.216348) | 2.341355 / 1.541195 (0.800160) | 2.402069 / 1.468490 
(0.933578) | 0.939303 / 4.584777 (-3.645474) | 5.718545 / 3.745712 (1.972833) | 2.856235 / 5.269862 (-2.413627) | 1.821555 / 4.565676 (-2.744121) | 0.105473 / 0.424275 (-0.318802) | 0.014490 / 0.007607 (0.006883) | 0.774349 / 0.226044 (0.548305) | 8.065048 / 2.268929 (5.796120) | 3.508482 / 55.444624 (-51.936143) | 2.822881 / 6.876477 (-4.053596) | 2.962947 / 2.142072 (0.820875) | 1.138944 / 4.805227 (-3.666284) | 0.248414 / 6.500664 (-6.252250) | 0.095665 / 0.075469 (0.020196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.688231 / 1.841788 (-0.153557) | 18.673305 / 8.074308 (10.598997) | 22.768663 / 10.191392 (12.577271) | 0.211238 / 0.680424 (-0.469186) | 0.031380 / 0.534201 (-0.502821) | 0.517175 / 0.579283 (-0.062108) | 0.626437 / 0.434364 (0.192073) | 0.624225 / 0.540337 (0.083888) | 0.743746 / 1.386936 (-0.643191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008888 / 0.011353 (-0.002464) | 0.005491 / 0.011008 (-0.005517) | 0.105013 / 0.038508 (0.066505) | 0.049456 / 0.023109 (0.026347) | 0.528989 / 0.275898 (0.253091) | 0.651871 / 0.323480 (0.328391) | 0.006683 / 0.007986 (-0.001302) | 0.004365 / 0.004328 (0.000037) | 0.098161 / 0.004250 (0.093911) | 0.075615 / 0.037052 (0.038563) | 0.543746 / 0.258489 (0.285257) | 0.650855 / 0.293841 (0.357014) | 0.050220 / 0.128546 (-0.078327) | 0.014471 / 0.075646 (-0.061175) | 0.115903 / 0.419271 (-0.303368) | 0.065925 / 0.043533 (0.022392) | 0.527797 / 0.255139 (0.272658) | 0.543834 / 0.283200 (0.260634) | 0.043005 / 0.141683 (-0.098678) | 1.842846 / 1.452155 (0.390691) | 1.970615 / 1.492716 (0.477899) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287350 / 0.018006 (0.269343) | 0.591139 / 0.000490 (0.590649) | 0.006423 / 0.000200 (0.006223) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034594 / 0.037411 (-0.002818) | 0.137155 / 0.014526 (0.122629) | 0.154662 / 0.176557 (-0.021894) | 0.217834 / 0.737135 (-0.519301) | 0.159642 / 0.296338 (-0.136696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.664288 / 0.215209 (0.449079) | 6.926912 / 2.077655 (4.849257) | 3.028957 / 1.504120 (1.524837) | 2.625178 / 1.541195 (1.083983) | 2.725316 / 1.468490 (1.256826) | 1.015715 / 4.584777 (-3.569062) | 5.834694 / 3.745712 (2.088982) | 5.105269 / 5.269862 (-0.164593) | 2.316194 / 4.565676 (-2.249483) | 0.113802 / 0.424275 (-0.310473) | 0.014079 / 0.007607 (0.006472) | 0.893727 / 0.226044 (0.667683) | 8.577701 / 2.268929 (6.308772) | 3.706907 / 55.444624 (-51.737717) | 3.087530 / 6.876477 (-3.788947) | 3.295004 / 2.142072 (1.152931) | 1.204172 / 4.805227 (-3.601055) | 0.248720 / 6.500664 (-6.251944) | 0.107208 / 0.075469 (0.031739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.800058 / 1.841788 (-0.041730) | 19.253646 / 8.074308 (11.179338) | 22.590804 / 10.191392 (12.399412) | 0.270687 / 0.680424 (-0.409737) | 0.028678 / 0.534201 (-0.505522) | 0.534670 / 0.579283 (-0.044613) | 0.642881 / 0.434364 (0.208518) | 0.615521 / 0.540337 (0.075184) | 0.723733 / 1.386936 (-0.663203) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.017236 / 0.011353 (0.005883) | 0.005341 / 0.011008 (-0.005667) | 0.131471 / 0.038508 (0.092963) | 0.048868 / 0.023109 (0.025758) | 0.448942 / 0.275898 (0.173044) | 0.498721 / 0.323480 (0.175241) | 0.006825 / 0.007986 (-0.001161) | 0.004587 / 0.004328 (0.000259) | 0.104142 / 0.004250 (0.099891) | 0.075521 / 0.037052 (0.038469) | 0.439538 / 0.258489 (0.181049) | 0.498720 / 0.293841 (0.204879) | 0.051352 / 0.128546 (-0.077194) | 0.015070 / 0.075646 (-0.060576) | 0.441752 / 0.419271 (0.022480) | 0.089166 / 0.043533 (0.045633) | 0.428909 / 0.255139 (0.173770) | 0.446648 / 0.283200 (0.163448) | 0.042371 / 0.141683 (-0.099312) | 1.993948 / 1.452155 (0.541793) | 2.065756 / 1.492716 (0.573039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257279 / 0.018006 (0.239273) | 0.575453 / 0.000490 (0.574964) | 0.004120 / 0.000200 (0.003920) | 0.000114 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034012 / 0.037411 (-0.003399) | 0.141737 / 0.014526 (0.127211) | 0.145241 / 0.176557 (-0.031316) | 0.226196 / 0.737135 (-0.510939) | 0.149526 / 0.296338 (-0.146813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665762 / 0.215209 (0.450553) | 6.683737 / 2.077655 (4.606083) | 2.869485 / 1.504120 (1.365365) | 2.462808 / 1.541195 (0.921613) | 2.526808 / 1.468490 
(1.058318) | 0.957518 / 4.584777 (-3.627259) | 5.926261 / 3.745712 (2.180548) | 5.027822 / 5.269862 (-0.242040) | 2.643185 / 4.565676 (-1.922491) | 0.117014 / 0.424275 (-0.307261) | 0.015142 / 0.007607 (0.007535) | 0.835694 / 0.226044 (0.609650) | 8.427356 / 2.268929 (6.158427) | 3.649597 / 55.444624 (-51.795027) | 2.989607 / 6.876477 (-3.886870) | 3.043160 / 2.142072 (0.901088) | 1.158872 / 4.805227 (-3.646355) | 0.240456 / 6.500664 (-6.260208) | 0.089196 / 0.075469 (0.013726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.689361 / 1.841788 (-0.152427) | 18.842158 / 8.074308 (10.767850) | 22.604249 / 10.191392 (12.412857) | 0.248487 / 0.680424 (-0.431936) | 0.029668 / 0.534201 (-0.504533) | 0.536283 / 0.579283 (-0.043001) | 0.663253 / 0.434364 (0.228890) | 0.622973 / 0.540337 (0.082635) | 0.735297 / 1.386936 (-0.651639) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009296 / 0.011353 (-0.002057) | 0.005955 / 0.011008 (-0.005053) | 0.105723 / 0.038508 (0.067215) | 0.051184 / 0.023109 (0.028074) | 0.527095 / 0.275898 (0.251197) | 0.631697 / 0.323480 (0.308217) | 0.006577 / 0.007986 (-0.001408) | 0.004452 / 0.004328 (0.000124) | 0.105921 / 0.004250 (0.101670) | 0.071951 / 0.037052 (0.034899) | 0.572518 / 0.258489 (0.314029) | 0.623957 / 0.293841 (0.330116) | 0.050861 / 0.128546 (-0.077686) | 0.014897 / 0.075646 (-0.060749) | 0.122013 / 0.419271 (-0.297258) | 0.067194 / 0.043533 (0.023661) | 0.530352 / 0.255139 (0.275213) | 0.563912 / 0.283200 (0.280712) | 0.034756 / 0.141683 (-0.106927) | 1.961580 / 1.452155 (0.509425) | 2.052412 / 1.492716 (0.559696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304996 / 0.018006 (0.286990) | 0.584899 / 0.000490 (0.584409) | 0.010444 / 0.000200 (0.010244) | 0.000134 / 0.000054 (0.000080) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032540 / 0.037411 (-0.004871) | 0.137349 / 0.014526 (0.122823) | 0.146233 / 0.176557 (-0.030323) | 0.206978 / 0.737135 (-0.530157) | 0.154380 / 0.296338 (-0.141959) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.705438 / 0.215209 (0.490229) | 7.042159 / 2.077655 (4.964504) | 3.285501 / 1.504120 (1.781381) | 2.904710 / 1.541195 (1.363515) | 2.952838 / 1.468490 (1.484348) | 0.987784 / 4.584777 (-3.596993) | 5.949550 / 3.745712 (2.203838) | 2.927148 / 5.269862 (-2.342714) | 1.870054 / 4.565676 (-2.695622) | 0.119548 / 0.424275 (-0.304727) | 0.014565 / 0.007607 (0.006958) | 0.858311 / 0.226044 (0.632266) | 8.721679 / 2.268929 (6.452750) | 4.100825 / 55.444624 (-51.343800) | 3.358093 / 6.876477 (-3.518383) | 3.499637 / 2.142072 (1.357564) | 1.208932 / 4.805227 (-3.596295) | 0.232961 / 6.500664 (-6.267703) | 0.089727 / 0.075469 (0.014258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.780143 / 1.841788 (-0.061645) | 19.074991 / 8.074308 (11.000683) | 21.218487 / 10.191392 (11.027095) | 0.258690 / 0.680424 (-0.421734) | 0.029514 / 0.534201 (-0.504687) | 0.541764 / 0.579283 (-0.037519) | 0.640603 / 0.434364 (0.206239) | 0.635336 / 0.540337 (0.094999) | 0.756309 / 1.386936 (-0.630627) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009619 / 0.011353 (-0.001734) | 0.005683 / 0.011008 (-0.005325) | 0.136971 / 0.038508 (0.098463) | 0.051607 / 0.023109 (0.028497) | 0.439716 / 0.275898 (0.163818) | 0.486193 / 0.323480 (0.162713) | 0.006304 / 0.007986 (-0.001681) | 0.004489 / 0.004328 (0.000160) | 0.103837 / 0.004250 (0.099587) | 0.082954 / 0.037052 (0.045901) | 0.447286 / 0.258489 (0.188797) | 0.495434 / 0.293841 (0.201593) | 0.049244 / 0.128546 (-0.079302) | 0.015176 / 0.075646 (-0.060470) | 0.444406 / 0.419271 (0.025134) | 0.074766 / 0.043533 (0.031233) | 0.438585 / 0.255139 (0.183446) | 0.438232 / 0.283200 (0.155032) | 0.043372 / 0.141683 (-0.098311) | 2.057286 / 1.452155 (0.605131) | 2.049540 / 1.492716 (0.556824) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298038 / 0.018006 (0.280031) | 0.630771 / 0.000490 (0.630281) | 0.008287 / 0.000200 (0.008087) | 0.000123 / 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033637 / 0.037411 (-0.003775) | 0.128327 / 0.014526 (0.113801) | 0.150672 / 0.176557 (-0.025885) | 0.228521 / 0.737135 (-0.508614) | 0.142733 / 0.296338 (-0.153606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629072 / 0.215209 (0.413863) | 6.612047 / 2.077655 (4.534392) | 2.715594 / 1.504120 (1.211474) | 2.327823 / 1.541195 (0.786628) | 2.417508 / 1.468490 
(0.949018) | 0.959134 / 4.584777 (-3.625643) | 5.669921 / 3.745712 (1.924209) | 2.977920 / 5.269862 (-2.291941) | 1.814564 / 4.565676 (-2.751112) | 0.120233 / 0.424275 (-0.304042) | 0.015859 / 0.007607 (0.008252) | 0.822618 / 0.226044 (0.596574) | 8.440306 / 2.268929 (6.171377) | 3.721611 / 55.444624 (-51.723013) | 2.954867 / 6.876477 (-3.921610) | 3.135364 / 2.142072 (0.993292) | 1.226475 / 4.805227 (-3.578752) | 0.246658 / 6.500664 (-6.254006) | 0.093920 / 0.075469 (0.018451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.665631 / 1.841788 (-0.176157) | 19.136369 / 8.074308 (11.062061) | 23.659564 / 10.191392 (13.468172) | 0.273430 / 0.680424 (-0.406994) | 0.028180 / 0.534201 (-0.506021) | 0.559588 / 0.579283 (-0.019695) | 0.649203 / 0.434364 (0.214840) | 0.647113 / 0.540337 (0.106776) | 0.737978 / 1.386936 (-0.648958) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009104 / 0.011353 (-0.002249) | 0.006838 / 0.011008 (-0.004171) | 0.104516 / 0.038508 (0.066008) | 0.047986 / 0.023109 (0.024877) | 0.521849 / 0.275898 (0.245951) | 0.586281 / 0.323480 (0.262801) | 0.006225 / 0.007986 (-0.001760) | 0.005713 / 0.004328 (0.001384) | 0.111507 / 0.004250 (0.107257) | 0.072320 / 0.037052 (0.035267) | 0.551061 / 0.258489 (0.292572) | 0.628034 / 0.293841 (0.334193) | 0.055417 / 0.128546 (-0.073129) | 0.019613 / 0.075646 (-0.056034) | 0.123958 / 0.419271 (-0.295314) | 0.066132 / 0.043533 (0.022600) | 0.504461 / 0.255139 (0.249322) | 0.560428 / 0.283200 (0.277229) | 0.036098 / 0.141683 (-0.105585) | 1.927398 / 1.452155 (0.475243) | 2.015952 / 1.492716 (0.523235) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313065 / 0.018006 (0.295059) | 0.609174 / 0.000490 (0.608684) | 0.008755 / 0.000200 (0.008555) | 0.000120 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.040042 / 0.037411 (0.002630) | 0.136053 / 0.014526 (0.121527) | 0.143406 / 0.176557 (-0.033150) | 0.213080 / 0.737135 (-0.524055) | 0.154730 / 0.296338 (-0.141609) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.692706 / 0.215209 (0.477497) | 6.952968 / 2.077655 (4.875314) | 3.232023 / 1.504120 (1.727903) | 2.835450 / 1.541195 (1.294256) | 2.933821 / 1.468490 (1.465331) | 0.984712 / 4.584777 (-3.600065) | 6.127651 / 3.745712 (2.381939) | 2.956781 / 5.269862 (-2.313081) | 1.879928 / 4.565676 (-2.685748) | 0.111069 / 0.424275 (-0.313206) | 0.014598 / 0.007607 (0.006991) | 0.871486 / 0.226044 (0.645442) | 8.588500 / 2.268929 (6.319572) | 3.910740 / 55.444624 (-51.533885) | 3.115781 / 6.876477 (-3.760695) | 3.222367 / 2.142072 (1.080294) | 1.229680 / 4.805227 (-3.575547) | 0.232092 / 6.500664 (-6.268572) | 0.097717 / 0.075469 (0.022248) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.774193 / 1.841788 (-0.067595) | 19.863087 / 8.074308 (11.788779) | 24.058856 / 10.191392 (13.867464) | 0.214917 / 0.680424 (-0.465507) | 0.028771 / 0.534201 (-0.505430) | 0.544548 / 0.579283 (-0.034735) | 0.655882 / 0.434364 (0.221518) | 0.629110 / 0.540337 (0.088773) | 0.749246 / 1.386936 (-0.637690) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007075 / 0.011353 (-0.004278) | 0.005195 / 0.011008 (-0.005813) | 0.113043 / 0.038508 (0.074535) | 0.038442 / 0.023109 (0.015333) | 0.336310 / 0.275898 (0.060412) | 0.381888 / 0.323480 (0.058409) | 0.005990 / 0.007986 (-0.001996) | 0.003893 / 0.004328 (-0.000435) | 0.093123 / 0.004250 (0.088872) | 0.058449 / 0.037052 (0.021397) | 0.359463 / 0.258489 (0.100974) | 0.427485 / 0.293841 (0.133644) | 0.041454 / 0.128546 (-0.087092) | 0.013016 / 0.075646 (-0.062630) | 0.372849 / 0.419271 (-0.046422) | 0.059386 / 0.043533 (0.015853) | 0.381398 / 0.255139 (0.126259) | 0.367603 / 0.283200 (0.084403) | 0.033907 / 0.141683 (-0.107775) | 1.628903 / 1.452155 (0.176749) | 1.764131 / 1.492716 (0.271415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298329 / 0.018006 (0.280322) | 0.593030 / 0.000490 (0.592540) | 0.007653 / 0.000200 (0.007453) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025445 / 0.037411 (-0.011966) | 0.112062 / 0.014526 (0.097536) | 0.119863 / 0.176557 (-0.056693) | 0.178389 / 0.737135 (-0.558746) | 0.129934 / 0.296338 (-0.166404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.532834 / 0.215209 (0.317625) | 5.250908 / 2.077655 (3.173253) | 2.086920 / 1.504120 (0.582800) | 1.799745 / 1.541195 (0.258550) | 1.909648 / 1.468490 
(0.441158) | 0.825382 / 4.584777 (-3.759395) | 5.268304 / 3.745712 (1.522592) | 2.533347 / 5.269862 (-2.736515) | 1.730187 / 4.565676 (-2.835490) | 0.099824 / 0.424275 (-0.324451) | 0.012969 / 0.007607 (0.005362) | 0.732234 / 0.226044 (0.506189) | 6.989066 / 2.268929 (4.720138) | 2.873486 / 55.444624 (-52.571138) | 2.274351 / 6.876477 (-4.602125) | 2.311060 / 2.142072 (0.168987) | 1.125366 / 4.805227 (-3.679861) | 0.214522 / 6.500664 (-6.286142) | 0.077579 / 0.075469 (0.002110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.670950 / 1.841788 (-0.170838) | 18.131528 / 8.074308 (10.057220) | 21.277823 / 10.191392 (11.086431) | 0.238807 / 0.680424 (-0.441617) | 0.032251 / 0.534201 (-0.501950) | 0.503859 / 0.579283 (-0.075424) | 0.604825 / 0.434364 (0.170461) | 0.555623 / 0.540337 (0.015286) | 0.647301 / 1.386936 (-0.739635) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010857 / 0.011353 (-0.000496) | 0.005581 / 0.011008 (-0.005427) | 0.094346 / 0.038508 (0.055838) | 0.053084 / 0.023109 (0.029975) | 0.457586 / 0.275898 (0.181688) | 0.545475 / 0.323480 (0.221995) | 0.006761 / 0.007986 (-0.001225) | 0.005094 / 0.004328 (0.000765) | 0.095509 / 0.004250 (0.091258) | 0.077182 / 0.037052 (0.040130) | 0.498717 / 0.258489 (0.240228) | 0.542433 / 0.293841 (0.248592) | 0.051547 / 0.128546 (-0.076999) | 0.014633 / 0.075646 (-0.061014) | 0.106843 / 0.419271 (-0.312428) | 0.068459 / 0.043533 (0.024926) | 0.435793 / 0.255139 (0.180654) | 0.475484 / 0.283200 (0.192285) | 0.039495 / 0.141683 (-0.102188) | 1.684906 / 1.452155 (0.232751) | 1.798693 / 1.492716 (0.305976) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279853 / 0.018006 (0.261847) | 0.601016 / 0.000490 (0.600526) | 0.002055 / 0.000200 (0.001855) | 0.000219 / 0.000054 (0.000165) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030935 / 0.037411 (-0.006477) | 0.121197 / 0.014526 (0.106671) | 0.143360 / 0.176557 (-0.033197) | 0.200862 / 0.737135 (-0.536274) | 0.138656 / 0.296338 (-0.157683) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.613904 / 0.215209 (0.398695) | 6.155422 / 2.077655 (4.077767) | 2.777238 / 1.504120 (1.273118) | 2.473045 / 1.541195 (0.931851) | 2.604470 / 1.468490 (1.135980) | 0.898871 / 4.584777 (-3.685906) | 5.739666 / 3.745712 (1.993954) | 4.719822 / 5.269862 (-0.550040) | 2.727354 / 4.565676 (-1.838322) | 0.108232 / 0.424275 (-0.316043) | 0.013632 / 0.007607 (0.006025) | 0.771802 / 0.226044 (0.545757) | 7.987466 / 2.268929 (5.718537) | 3.609856 / 55.444624 (-51.834768) | 2.974421 / 6.876477 (-3.902056) | 2.956567 / 2.142072 (0.814495) | 1.093792 / 4.805227 (-3.711435) | 0.213369 / 6.500664 (-6.287295) | 0.084486 / 0.075469 (0.009017) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.693855 / 1.841788 (-0.147933) | 18.055027 / 8.074308 (9.980719) | 21.397964 / 10.191392 (11.206571) | 0.240549 / 0.680424 (-0.439875) | 0.031212 / 0.534201 (-0.502989) | 0.513657 / 0.579283 (-0.065626) | 0.651348 / 0.434364 (0.216985) | 0.603740 / 0.540337 (0.063402) | 0.752287 / 1.386936 (-0.634649) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3684/comments | https://api.github.com/repos/huggingface/datasets/issues/3684/events | https://github.com/huggingface/datasets/pull/3684 | 1,125,133,664 | PR_kwDODunzps4yIOer | 3,684 | [fix]: iwslt2017 download urls | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 7 | 2022-02-06T07:56:55Z | 2022-09-22T16:20:19Z | 2022-09-22T16:20:18Z | null | Fixes #2076. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3684/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3684/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3684.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3684",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3684.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3684"
} | true | [
"Hi ! Thanks for the fix ! Do you know where this new URL comes from ?\r\n\r\nAlso we try to not use Google Drive if possible, since it has download quota limitations. Do you know if the data is available from another host than Google Drive ?",
"Oh, I found it just by following the link from the [IWSLT2017 homepage](https://wit3.fbk.eu/2017-01). Not sure if it's available from another host.",
"Ok cool ! I guess it's ok to use this URL for now, and we can see later if we need to change it.\r\n\r\nBefore merging, could you update the `dataset_infos.json` file by running this command please ?\r\n```\r\ndatasets-cli test ./datasets/iwslt2017 --save_infos --all_configs\r\n```",
"sure thing. lmk if there's anything else i can do to help.",
"just checking in. is there anything i can do to help on my end to get this merged? (the dummy data tests are failing due an incorrect path, i think)",
"Thanks ! I also fixed the dummy data :)\r\n\r\nTo fix the CI, feel free to merge the `master` branch into your PR.\r\n\r\nIf you have some time, feel free to also take a look at the missing YAML tags at the top of the README.md file of this dataset:\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE missing 9 required tags: 'annotations_creators', 'language_creators', 'languages', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n```\r\nyou can use the dataset tagging app here: https://huggingface.co/spaces/huggingface/datasets-tagging",
"I guess this PR was superseded by this other:\r\n- #4481\r\n\r\nThanks for your contribution anyway, @msarmi9. "
] |
https://api.github.com/repos/huggingface/datasets/issues/3347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3347/comments | https://api.github.com/repos/huggingface/datasets/issues/3347/events | https://github.com/huggingface/datasets/pull/3347 | 1,067,738,902 | PR_kwDODunzps4vNthw | 3,347 | iter_archive for zip files | [] | closed | false | null | 1 | 2021-11-30T22:34:17Z | 2021-12-04T00:22:22Z | 2021-12-04T00:22:11Z | null | * In this PR, I added the option to iterate through zipfiles for `download_manager.py` only.
* The next PR will apply the same change to `streaming_download_manager.py`.
* Related issue #3272.
## Comments:
* There is no `.isreg()` equivalent in the zipfile library to check whether a member is a regular file, so I used `.is_dir()` instead to skip directories (see the sketch below).
* For now I have `streaming_download_manager.py` working for local zip files, but not for URLs. When I test it on an archive in Google Drive I get the following error, so I'm still working on it: `BlockSizeError: Got more bytes so far (>2112) than requested (22)`
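A minimal sketch (not part of the original PR) of what skipping directory entries with `zipfile` can look like; the helper name `iter_zip_members` is made up for illustration:

```python
import zipfile

def iter_zip_members(path):
    """Yield (name, file object) pairs for the regular members of a zip archive.

    zipfile has no `.isreg()` equivalent, so directory entries are skipped
    via `ZipInfo.is_dir()` instead.
    """
    with zipfile.ZipFile(path) as zf:
        for member in zf.infolist():
            if member.is_dir():
                # directory entries carry no file content to yield
                continue
            with zf.open(member) as file_obj:
                yield member.filename, file_obj
```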
## Tasks:
- [x] download_manager.py
- [ ] streaming_download_manager.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3347/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3347/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3347.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3347",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3347.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3347"
} | true | [
"And also don't always try streaming with Google Drive - it can have issues because of how Google Drive works (with quotas, restrictions, etc.) and it can indeed cause `BlockSizeError`.\r\n\r\nFeel free to host your test data elsewhere, such as in a dataset repository on https://huggingface.co (see [here](https://huggingface.co/docs/datasets/upload_dataset.html#upload-your-files) for a tutorial on how to upload files)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4584/comments | https://api.github.com/repos/huggingface/datasets/issues/4584/events | https://github.com/huggingface/datasets/pull/4584 | 1,286,911,993 | PR_kwDODunzps46eVF7 | 4,584 | Add binary classification task IDs | [] | closed | false | null | 4 | 2022-06-28T07:30:39Z | 2023-01-26T09:27:53Z | 2023-01-26T09:27:52Z | null | As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification.
This PR adds binary classification to the task IDs to enable this.
Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597
cc @abhishekkrthakur @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4584/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4584.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4584",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4584.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4584"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4584). All of your documentation changes will be reflected on that endpoint.",
"> Awesome thanks ! Can you add it to https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts first please ? This is where we define the cross libraries tasks taxonomy ;)\r\n\r\nThanks for the tip! Done in https://github.com/huggingface/hub-docs/pull/217",
"I don't think we need to update this file anymore. We should remove it IMO, and simply update the dataset [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging)",
"I'm closing this PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/1000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1000/comments | https://api.github.com/repos/huggingface/datasets/issues/1000/events | https://github.com/huggingface/datasets/pull/1000 | 755,292,066 | MDExOlB1bGxSZXF1ZXN0NTMxMDMxMTE1 | 1,000 | UM005: Urdu <> English Translation Dataset | [] | closed | false | null | 0 | 2020-12-02T13:51:35Z | 2020-12-04T15:34:30Z | 2020-12-04T15:34:29Z | null | Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1000/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1000/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1000.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1000",
"merged_at": "2020-12-04T15:34:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1000.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1000"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2389/comments | https://api.github.com/repos/huggingface/datasets/issues/2389/events | https://github.com/huggingface/datasets/pull/2389 | 897,822,270 | MDExOlB1bGxSZXF1ZXN0NjQ5Nzc3MDMz | 2,389 | Insert task templates for text classification | [] | closed | false | null | 6 | 2021-05-21T08:36:26Z | 2021-05-28T15:28:58Z | 2021-05-28T15:26:28Z | null | This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140`, which only has `Value`-type features; these will be handled in a separate PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2389/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2389",
"merged_at": "2021-05-28T15:26:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2389"
} | true | [
"Update: found a few datasets that slipped through the net. Adding them shortly!",
"You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?",
"> You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?\r\n\r\nhi @yjernite, these code insertions are auto-generated so could certainly be improved :) \r\n\r\njust so i understand, your idea is that instead of doing something like\r\n\r\n```python\r\nclass AGNews(datasets.GeneratorBasedBuilder):\r\n \"\"\"AG News topic classification dataset.\"\"\"\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(\r\n names=[\"World\", \"Sports\", \"Business\", \"Sci/Tech\"]\r\n ),\r\n }\r\n ),\r\n homepage=\"http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html\",\r\n citation=_CITATION,\r\n task_templates=[\r\n TextClassification(\r\n labels=(\"Business\", \"Sci/Tech\", \"Sports\", \"World\"),\r\n text_column=\"text\",\r\n label_column=\"label\",\r\n )\r\n ],\r\n )\r\n```\r\n\r\nwe could do the following:\r\n\r\n```python\r\nclass AGNews(datasets.GeneratorBasedBuilder):\r\n \"\"\"AG News topic classification dataset.\"\"\"\r\n\r\n def _info(self):\r\n info = datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(\r\n names=[\"World\", \"Sports\", \"Business\", \"Sci/Tech\"]\r\n ),\r\n }\r\n ),\r\n homepage=\"http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html\",\r\n citation=_CITATION,\r\n )\r\n\r\n info.task_templates = [\r\n TextClassification(\r\n labels=info.features.names,\r\n text_column=\"text\",\r\n label_column=\"label\",\r\n )\r\n ]\r\n return info\r\n```\r\n\r\n",
"Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?",
"> Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?\r\n\r\nOh yes, that would be great! It does mean enforcing that people use the right feature type (sometimes people still use a `string` feature still because they don't want to enumerate the classes, but I guess you've been catching most of those in reviews @lhoestq )\r\n\r\nThere might be reasons where there should be a legitimate difference, but I can't really think of nay right now, and we can always duplicate the feature",
"Let's ignore the CI fails since they are unrelated to your changes. They're about dataset cards issues"
] |
https://api.github.com/repos/huggingface/datasets/issues/6013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6013/comments | https://api.github.com/repos/huggingface/datasets/issues/6013/events | https://github.com/huggingface/datasets/issues/6013 | 1,796,083,437 | I_kwDODunzps5rDg7t | 6,013 | [FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | null | 1 | 2023-07-10T06:42:20Z | 2023-07-10T15:37:52Z | null | null | ### Feature request
Currently, adding a new column with `map` causes all the data in the dataset to be duplicated and stored/cached on disk again. It should reuse the unchanged columns.
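A minimal sketch (not part of the original request) showing how the duplication can be observed on a disk-backed dataset; `imdb` is just an arbitrary example dataset with a `text` column:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# map() writes a brand-new Arrow cache file that contains *all* columns,
# even though only "text_len" was added
ds_new = ds.map(lambda x: {"text_len": len(x["text"])})

print(ds.cache_files)      # original cache file(s)
print(ds_new.cache_files)  # new cache file that also duplicates the unchanged columns
```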
### Motivation
This would allow having datasets with different columns that share some basic columns. Currently, such datasets become too expensive to store, and one would need some kind of on-the-fly join, which also doesn't seem to be implemented.
### Your contribution
_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6013/timeline | null | null | null | null | false | [
"You can use the `remove_columns` parameter in `map` to avoid duplicating the columns (and save disk space) and then concatenate the original dataset with the map result:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n# dummy example\r\nds_new = ds.map(lambda x: {\"new_col\": x[\"col\"] + 2}, remove_columns=ds.column_names)\r\nds_combined = concatenate_datasets([ds, ds_new], axis=1)\r\n```\r\n\r\nDoing this automatically is hard to implement efficiently unless we know ahead of time which existing columns will be modified by a `map` transform. We have this info when `input_columns` are specified, so I think this is the only case we can optimize."
] |
https://api.github.com/repos/huggingface/datasets/issues/5954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5954/comments | https://api.github.com/repos/huggingface/datasets/issues/5954/events | https://github.com/huggingface/datasets/pull/5954 | 1,756,572,994 | PR_kwDODunzps5S-hSP | 5,954 | Better filenotfound for gated | [] | closed | false | null | 3 | 2023-06-14T10:33:10Z | 2023-06-14T12:33:27Z | 2023-06-14T12:26:31Z | null | close https://github.com/huggingface/datasets/issues/5953
<img width="1292" alt="image" src="https://github.com/huggingface/datasets/assets/42851186/270fe5bc-1739-4878-b7bc-ab6d35336d4d">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5954/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5954",
"merged_at": "2023-06-14T12:26:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5954"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006374 / 0.011353 (-0.004979) | 0.004100 / 0.011008 (-0.006909) | 0.104031 / 0.038508 (0.065523) | 0.035186 / 0.023109 (0.012076) | 0.328904 / 0.275898 (0.053006) | 0.361409 / 0.323480 (0.037929) | 0.003855 / 0.007986 (-0.004130) | 0.004140 / 0.004328 (-0.000189) | 0.080406 / 0.004250 (0.076156) | 0.045658 / 0.037052 (0.008606) | 0.341133 / 0.258489 (0.082644) | 0.372688 / 0.293841 (0.078847) | 0.032025 / 0.128546 (-0.096521) | 0.008877 / 0.075646 (-0.066769) | 0.354784 / 0.419271 (-0.064488) | 0.068874 / 0.043533 (0.025341) | 0.335441 / 0.255139 (0.080302) | 0.356498 / 0.283200 (0.073298) | 0.113367 / 0.141683 (-0.028316) | 1.522458 / 1.452155 (0.070304) | 1.608046 / 1.492716 (0.115329) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231653 / 0.018006 (0.213647) | 0.446678 / 0.000490 (0.446188) | 0.003246 / 0.000200 (0.003046) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025299 / 0.037411 (-0.012112) | 0.111440 / 0.014526 (0.096914) | 0.118758 / 0.176557 (-0.057799) | 0.175037 / 0.737135 (-0.562098) | 0.124583 / 0.296338 (-0.171755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418694 / 0.215209 (0.203484) | 4.174695 / 2.077655 (2.097041) | 1.890323 / 1.504120 (0.386203) | 1.683300 / 1.541195 (0.142106) | 1.781954 / 1.468490 
(0.313464) | 0.546131 / 4.584777 (-4.038645) | 3.768055 / 3.745712 (0.022343) | 1.839878 / 5.269862 (-3.429983) | 1.111877 / 4.565676 (-3.453800) | 0.068568 / 0.424275 (-0.355707) | 0.011950 / 0.007607 (0.004343) | 0.527469 / 0.226044 (0.301425) | 5.274887 / 2.268929 (3.005958) | 2.391274 / 55.444624 (-53.053351) | 2.063837 / 6.876477 (-4.812640) | 2.140627 / 2.142072 (-0.001445) | 0.681508 / 4.805227 (-4.123719) | 0.148203 / 6.500664 (-6.352461) | 0.064456 / 0.075469 (-0.011013) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221478 / 1.841788 (-0.620310) | 14.713705 / 8.074308 (6.639397) | 14.674184 / 10.191392 (4.482792) | 0.148411 / 0.680424 (-0.532012) | 0.017858 / 0.534201 (-0.516343) | 0.436166 / 0.579283 (-0.143117) | 0.437290 / 0.434364 (0.002926) | 0.521994 / 0.540337 (-0.018343) | 0.635488 / 1.386936 (-0.751448) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006108 / 0.011353 (-0.005245) | 0.003888 / 0.011008 (-0.007120) | 0.078424 / 0.038508 (0.039916) | 0.033618 / 0.023109 (0.010509) | 0.376284 / 0.275898 (0.100386) | 0.396957 / 0.323480 (0.073477) | 0.003799 / 0.007986 (-0.004187) | 0.003160 / 0.004328 (-0.001168) | 0.078358 / 0.004250 (0.074107) | 0.045597 / 0.037052 (0.008545) | 0.386396 / 0.258489 (0.127907) | 0.412985 / 0.293841 (0.119144) | 0.031610 / 0.128546 (-0.096936) | 0.008720 / 0.075646 (-0.066926) | 0.085944 / 0.419271 (-0.333328) | 0.050780 / 0.043533 (0.007247) | 0.378099 / 0.255139 (0.122960) | 0.381894 / 0.283200 (0.098694) | 0.098926 / 0.141683 (-0.042756) | 1.513842 / 1.452155 (0.061688) | 1.595040 / 1.492716 (0.102323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208169 / 0.018006 (0.190163) | 0.431653 / 0.000490 (0.431163) | 0.000935 / 0.000200 (0.000735) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029600 / 0.037411 (-0.007812) | 0.116936 / 0.014526 (0.102410) | 0.125603 / 0.176557 (-0.050953) | 0.177007 / 0.737135 (-0.560129) | 0.130602 / 0.296338 (-0.165736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457158 / 0.215209 (0.241949) | 4.563254 / 2.077655 (2.485599) | 2.303549 / 1.504120 (0.799429) | 2.107269 / 1.541195 (0.566074) | 2.130861 / 1.468490 (0.662371) | 0.548931 / 4.584777 (-4.035846) | 3.745578 / 3.745712 (-0.000134) | 1.820372 / 5.269862 (-3.449490) | 1.099316 / 4.565676 (-3.466361) | 0.068218 / 0.424275 (-0.356057) | 0.012336 / 0.007607 (0.004728) | 0.569721 / 0.226044 (0.343676) | 5.691312 / 2.268929 (3.422384) | 2.797483 / 55.444624 (-52.647141) | 2.422621 / 6.876477 (-4.453855) | 2.426187 / 2.142072 (0.284115) | 0.674777 / 4.805227 (-4.130451) | 0.144855 / 6.500664 (-6.355809) | 0.065805 / 0.075469 (-0.009664) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.305078 / 1.841788 (-0.536709) | 14.874315 / 8.074308 (6.800007) | 14.541301 / 10.191392 (4.349909) | 0.175818 / 0.680424 (-0.504606) | 0.018169 / 0.534201 (-0.516032) | 0.435836 / 0.579283 (-0.143447) | 0.458397 / 0.434364 (0.024033) | 0.506232 / 0.540337 (-0.034106) | 0.605306 / 1.386936 (-0.781630) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006138 / 0.011353 (-0.005215) | 0.003792 / 0.011008 (-0.007216) | 0.099417 / 0.038508 (0.060908) | 0.028739 / 0.023109 (0.005630) | 0.302835 / 0.275898 (0.026937) | 0.336397 / 0.323480 (0.012918) | 0.003537 / 0.007986 (-0.004449) | 0.002973 / 0.004328 (-0.001355) | 0.077461 / 0.004250 (0.073211) | 0.039493 / 0.037052 (0.002440) | 0.302367 / 0.258489 (0.043878) | 0.344936 / 0.293841 (0.051095) | 0.027813 / 0.128546 (-0.100733) | 0.008591 / 0.075646 (-0.067055) | 0.318975 / 0.419271 (-0.100297) | 0.045971 / 0.043533 (0.002438) | 0.301672 / 0.255139 (0.046533) | 0.328202 / 0.283200 (0.045003) | 0.091400 / 0.141683 (-0.050282) | 1.487215 / 1.452155 (0.035060) | 1.557730 / 1.492716 (0.065014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208343 / 0.018006 (0.190336) | 0.426764 / 0.000490 (0.426275) | 0.001196 / 0.000200 (0.000996) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024332 / 0.037411 (-0.013079) | 0.101861 / 0.014526 (0.087335) | 0.108669 / 0.176557 (-0.067888) | 0.172042 / 0.737135 (-0.565093) | 0.113048 / 0.296338 (-0.183290) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421419 / 0.215209 (0.206210) | 4.200816 / 2.077655 (2.123162) | 1.913516 / 1.504120 (0.409396) | 1.712167 / 1.541195 (0.170972) | 1.762129 / 1.468490 
(0.293639) | 0.561616 / 4.584777 (-4.023161) | 3.398122 / 3.745712 (-0.347590) | 1.744323 / 5.269862 (-3.525538) | 1.036023 / 4.565676 (-3.529653) | 0.067658 / 0.424275 (-0.356617) | 0.011145 / 0.007607 (0.003538) | 0.522803 / 0.226044 (0.296759) | 5.226245 / 2.268929 (2.957317) | 2.355148 / 55.444624 (-53.089476) | 2.014939 / 6.876477 (-4.861538) | 2.140028 / 2.142072 (-0.002044) | 0.695049 / 4.805227 (-4.110178) | 0.138428 / 6.500664 (-6.362236) | 0.066721 / 0.075469 (-0.008748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219610 / 1.841788 (-0.622177) | 14.239576 / 8.074308 (6.165268) | 14.381955 / 10.191392 (4.190563) | 0.131208 / 0.680424 (-0.549216) | 0.016698 / 0.534201 (-0.517503) | 0.361373 / 0.579283 (-0.217910) | 0.382560 / 0.434364 (-0.051804) | 0.419427 / 0.540337 (-0.120911) | 0.508314 / 1.386936 (-0.878622) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006174 / 0.011353 (-0.005179) | 0.003893 / 0.011008 (-0.007115) | 0.079614 / 0.038508 (0.041106) | 0.028685 / 0.023109 (0.005576) | 0.368627 / 0.275898 (0.092729) | 0.411599 / 0.323480 (0.088119) | 0.003573 / 0.007986 (-0.004413) | 0.002989 / 0.004328 (-0.001340) | 0.078653 / 0.004250 (0.074402) | 0.041146 / 0.037052 (0.004094) | 0.362387 / 0.258489 (0.103898) | 0.417234 / 0.293841 (0.123393) | 0.027958 / 0.128546 (-0.100589) | 0.008695 / 0.075646 (-0.066952) | 0.084637 / 0.419271 (-0.334635) | 0.044188 / 0.043533 (0.000655) | 0.358514 / 0.255139 (0.103375) | 0.392314 / 0.283200 (0.109114) | 0.093986 / 0.141683 (-0.047697) | 1.535366 / 1.452155 (0.083212) | 1.605978 / 1.492716 (0.113262) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196215 / 0.018006 (0.178209) | 0.429403 / 0.000490 (0.428913) | 0.003736 / 0.000200 (0.003536) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025281 / 0.037411 (-0.012130) | 0.104325 / 0.014526 (0.089799) | 0.111548 / 0.176557 (-0.065009) | 0.162326 / 0.737135 (-0.574809) | 0.113853 / 0.296338 (-0.182486) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447600 / 0.215209 (0.232391) | 4.463422 / 2.077655 (2.385767) | 2.168028 / 1.504120 (0.663908) | 1.968699 / 1.541195 (0.427504) | 2.035531 / 1.468490 (0.567041) | 0.564575 / 4.584777 (-4.020202) | 3.435338 / 3.745712 (-0.310374) | 2.981930 / 5.269862 (-2.287932) | 1.492172 / 4.565676 (-3.073505) | 0.067981 / 0.424275 (-0.356294) | 0.011254 / 0.007607 (0.003647) | 0.544385 / 0.226044 (0.318340) | 5.441694 / 2.268929 (3.172765) | 2.650168 / 55.444624 (-52.794456) | 2.333974 / 6.876477 (-4.542503) | 2.383424 / 2.142072 (0.241351) | 0.669814 / 4.805227 (-4.135414) | 0.135456 / 6.500664 (-6.365209) | 0.067067 / 0.075469 (-0.008402) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313275 / 1.841788 (-0.528513) | 14.527636 / 8.074308 (6.453328) | 14.470957 / 10.191392 (4.279565) | 0.144361 / 0.680424 (-0.536063) | 0.016847 / 0.534201 (-0.517354) | 0.365158 / 0.579283 (-0.214125) | 0.393809 / 0.434364 (-0.040555) | 0.428527 / 0.540337 (-0.111810) | 0.515816 / 1.386936 (-0.871120) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5049/comments | https://api.github.com/repos/huggingface/datasets/issues/5049/events | https://github.com/huggingface/datasets/pull/5049 | 1,392,361,381 | PR_kwDODunzps4_7zOY | 5,049 | Add `kwargs` to `Dataset.from_generator` | [] | closed | false | null | 1 | 2022-09-30T12:24:27Z | 2022-10-03T11:00:11Z | 2022-10-03T10:58:15Z | null | Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5049/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5049/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5049.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5049",
"merged_at": "2022-10-03T10:58:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5049.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5049"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1383/comments | https://api.github.com/repos/huggingface/datasets/issues/1383/events | https://github.com/huggingface/datasets/pull/1383 | 760,331,480 | MDExOlB1bGxSZXF1ZXN0NTM1MTgxMDQ2 | 1,383 | added conv ai 2 | [] | closed | false | null | 2 | 2020-12-09T13:30:12Z | 2020-12-13T18:54:42Z | 2020-12-13T18:54:41Z | null | Dataset : https://github.com/DeepPavlov/convai/tree/master/2018 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1383/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1383/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1383.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1383",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1383.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1383"
} | true | [
"@lhoestq Thank you for the suggestions. I added the changes to the branch and seems after rebasing it to master, all the commits previous commits got added. Should I create a new PR or should I keep this one only ? ",
"closing this one in favor of #1527 "
] |
https://api.github.com/repos/huggingface/datasets/issues/734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/734/comments | https://api.github.com/repos/huggingface/datasets/issues/734/events | https://github.com/huggingface/datasets/pull/734 | 721,767,848 | MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz | 734 | Fix GLUE metric description | [] | closed | false | null | 0 | 2020-10-14T20:44:14Z | 2020-10-15T09:27:43Z | 2020-10-15T09:27:42Z | null | Small typo: the description says translation instead of prediction. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/734/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/734.diff",
"html_url": "https://github.com/huggingface/datasets/pull/734",
"merged_at": "2020-10-15T09:27:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/734.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/734"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2764/comments | https://api.github.com/repos/huggingface/datasets/issues/2764/events | https://github.com/huggingface/datasets/pull/2764 | 962,554,799 | MDExOlB1bGxSZXF1ZXN0NzA1MzI3MDQ5 | 2,764 | Add DER metric for SUPERB speaker diarization task | [
{
"color": "E3165C",
"default": false,
"description": "",
"id": 4190228726,
"name": "transfer-to-evaluate",
"node_id": "LA_kwDODunzps75wdD2",
"url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate"
}
] | closed | false | null | 1 | 2021-08-06T09:12:36Z | 2023-07-11T09:35:23Z | 2023-07-11T09:35:23Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2764/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2764/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/2764.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2764",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2764.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2764"
} | true | [
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] |
https://api.github.com/repos/huggingface/datasets/issues/2072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2072/comments | https://api.github.com/repos/huggingface/datasets/issues/2072/events | https://github.com/huggingface/datasets/pull/2072 | 834,054,837 | MDExOlB1bGxSZXF1ZXN0NTk0OTQ5NjA4 | 2,072 | Fix docstring issues | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 2 | 2021-03-17T18:13:44Z | 2021-03-24T08:20:57Z | 2021-03-18T12:41:21Z | null | Fix docstring issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2072/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2072",
"merged_at": "2021-03-18T12:41:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2072"
} | true | [
"I think I will stop pushing to this PR, so that it can me merged for today release. \r\n\r\nI will open another PR for further fixing docs.\r\n\r\nDo you agree, @lhoestq ?",
"Sounds good thanks !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3501/comments | https://api.github.com/repos/huggingface/datasets/issues/3501/events | https://github.com/huggingface/datasets/pull/3501 | 1,090,413,758 | PR_kwDODunzps4wXM8H | 3,501 | Update pib dataset card | [] | closed | false | null | 0 | 2021-12-29T10:14:40Z | 2021-12-29T11:13:21Z | 2021-12-29T11:13:21Z | null | Related to #3496 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3501/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3501.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3501",
"merged_at": "2021-12-29T11:13:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3501.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3501"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2357/comments | https://api.github.com/repos/huggingface/datasets/issues/2357/events | https://github.com/huggingface/datasets/pull/2357 | 890,595,693 | MDExOlB1bGxSZXF1ZXN0NjQzNTk0NDcz | 2,357 | Adding Microsoft CodeXGlue Datasets | [] | closed | false | null | 16 | 2021-05-13T00:43:01Z | 2021-06-08T09:29:57Z | 2021-06-08T09:29:57Z | null | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've addressed all of the changes still left to do in the old PR, except for the change to the languages. I believe the READMEs should list the specific programming languages used rather than just the tag "code", since when searching for datasets, SE researchers may be looking for a particular programming language, and being able to filter quickly will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2357/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2357/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2357.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2357",
"merged_at": "2021-06-08T09:29:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2357.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2357"
} | true | [
"Oh one other thing. Mentioned in the PR was that I would need to regenerate the dataset_infos.json once the camel casing was done. However, I am unsure why this is the case since there is no reference to any object names in the dataset_infos.json file.\r\n\r\nIf it needs to be reran, I can try it do it on my own machine, but I've had a memory issues with a previous dataset due to my compute constraints so I'd prefer to hopefully avoid it all together if not necessary to regenerate.",
"Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:\r\n\r\n`CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?",
"> Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:\r\n> \r\n> `CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?\r\n\r\nIf it's already in this format then it's fine thanks ! It's all good then\r\n\r\nTo fix the CI you just need to add the `encoding=` parameters to the `open()` calls",
"@lhoestq I think everything should be good to go besides the code styling, which seem to be due to missing or unsupported metadata tags for the READMEs, is this something I should worry about since all the other datasets seem to be failing as well?",
"Awesome! Just committed your changes and I will begin on adding the TOCs and filling in the content for the new sections/subsections.\r\n\r\nAlso, I see that we are having to only use the `code` tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.",
"> Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.\r\n\r\nYes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature\r\n\r\ncc @yjernite what do you think about extending our languages taxonomy to programming languages ?",
"Hey @lhoestq, just finalizing the READMEs and testing them against the automated test. For the non, WIN tests, it seems like there is some dependency issue that doesn't have to do with the new datasets. For the WIN tests, it looks like some of the headings are mislabeled such as \"Supported Tasks and Leaderboards\" -> \"Supported Tasks\" in the TOC you posted. Should I base my TOC on the one you posted or on the one that the test script is using? Also, it throws errors for some of the fields being empty, such as \"Source Data\" in the `code_x_glue_tt_text_to_text` dataset. However, I am not familiar with this dataset, so I put the `[More Information Needed]` stub, similar to the other sections I couldn't easily answer. For some of the sections like \"Source Data\", is this info required?",
"Yes you're right, it is `Supported Tasks and Leaderboards` that we need to use, sorry about that\r\n\r\nI also noticed the same for the splits section: we have to use `Data Splits` (not Data Splits Sample Size)\r\n",
"Some subsections are also missing: `Initial Data Collection and Normalization`, `Who are the source language producers?`.\r\nIf you are interested you can fill those sections as well, or leave them empty for now.\r\nThis will also fix the error regarding \"Source Data\"\r\n\r\nYou can see the template of the readme here:\r\nhttps://github.com/huggingface/datasets/blob/9d8bf36fdb861d9b2922d7c782fb58f9f542997c/templates/README.md",
"> > Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.\r\n> \r\n> Yes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature\r\n> \r\n> cc @yjernite what do you think about extending our languages taxonomy to programming languages ?\r\n\r\nSounds good, as long as they all share a prefix! maybe `code_cpp`, `code_java`, etc. ? \r\n\r\nI don't think we currently have `_` in language codes/names, but also don't see what it would break *a priori*",
"We don't use `_` but there are some languages that use `-` though like `en-US`. Let's use `-` maybe, to match the same hierarchy pattern ?",
"Hi guys, I just started working on https://github.com/huggingface/datasets/pull/997 this morning and I just realized that you were finishing it... You may want to get the dataset cards from https://github.com/madlag/datasets, and maybe some code too, as I did a few things like moving _CITATION and _DESCRIPTION to globals.\r\n\r\n",
"I am renaming the main classes to match the dataset names, for example : CodeXGlueTcTextToCodeMain -> CodeXGlueTcTextToCode . And I am regenerating the dataset_infos.json accordingly.",
"Thanks for renaming the classes and updating the dataset_infos.json ! This looks all clean now :)\r\n\r\nThis PR looks all good to me :) One just needs to merge master into this branch to make sure the CI is green with the latest changes. It should also fix the current CI issues that are not related to this PR",
"Woot woot :rocket:! All green, looks like it is ready for showtime. Thank you both @lhoestq and especially @madlag, I think these datasets are going to be a great new addition to :hugs: datasets and I can't wait to use them in my research :nerd_face:.",
"Thanks @ncoop57 for you contribution! It will be really cool to see those datasets used as soon as they are released !"
] |
https://api.github.com/repos/huggingface/datasets/issues/572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/572/comments | https://api.github.com/repos/huggingface/datasets/issues/572/events | https://github.com/huggingface/datasets/pull/572 | 692,598,231 | MDExOlB1bGxSZXF1ZXN0NDc5MTgyNDU3 | 572 | Add CLUE Benchmark (11 datasets) | [] | closed | false | null | 3 | 2020-09-04T01:57:40Z | 2020-09-07T09:59:11Z | 2020-09-07T09:59:10Z | null | Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/572/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/572.diff",
"html_url": "https://github.com/huggingface/datasets/pull/572",
"merged_at": "2020-09-07T09:59:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/572.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/572"
} | true | [
"Thanks, @lhoestq! I've addressed the comments. \r\nAlso, I have tried to use `ClassLabel` [when possible](https://github.com/huggingface/nlp/pull/572/files#diff-1026ac7d7b78bf029cb0ebe63162c77dR297). Is there still somewhere else we can use `ClassLabel`? ",
"I believe CI failure is unrelated.",
"Great job! "
] |
https://api.github.com/repos/huggingface/datasets/issues/2573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2573/comments | https://api.github.com/repos/huggingface/datasets/issues/2573/events | https://github.com/huggingface/datasets/issues/2573 | 934,584,745 | MDU6SXNzdWU5MzQ1ODQ3NDU= | 2,573 | Finding right block-size with JSON loading difficult for user | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 1 | 2021-07-01T08:48:35Z | 2021-07-01T19:10:53Z | null | null | As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets
> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2573/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2573/timeline | null | null | null | null | false | [
"This was actually a second error arising from a too small block-size in the json reader.\r\n\r\nFinding the right block size is difficult for the layman user"
] |
https://api.github.com/repos/huggingface/datasets/issues/4220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4220/comments | https://api.github.com/repos/huggingface/datasets/issues/4220/events | https://github.com/huggingface/datasets/pull/4220 | 1,215,225,802 | PR_kwDODunzps42w5YO | 4,220 | Altered faiss installation comment | [] | closed | false | null | 3 | 2022-04-26T01:20:43Z | 2022-05-09T17:29:34Z | 2022-05-09T17:22:09Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4220/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4220.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4220",
"merged_at": "2022-05-09T17:22:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4220.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4220"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi ! Can you explain why this change is needed ?",
"Facebook recommends installing FAISS using conda (https://github.com/facebookresearch/faiss/blob/main/INSTALL.md). pip does not seem to have the latest version of FAISS. The latest version of faiss is 1.7.2 (https://anaconda.org/conda-forge/faiss), but the latest one available through pip is 1.5.3 (https://pypi.org/project/faiss/). "
] |
https://api.github.com/repos/huggingface/datasets/issues/541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/541/comments | https://api.github.com/repos/huggingface/datasets/issues/541/events | https://github.com/huggingface/datasets/issues/541 | 688,521,224 | MDU6SXNzdWU2ODg1MjEyMjQ= | 541 | Best practices for training tokenizers with nlp | [] | closed | false | null | 1 | 2020-08-29T12:06:49Z | 2022-10-04T17:28:04Z | 2022-10-04T17:28:04Z | null | Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the documentation and examples, I could only find pre-trained tokenizers being used. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/541/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/541/timeline | null | completed | null | null | false | [
"Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library"
] |
https://api.github.com/repos/huggingface/datasets/issues/2897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2897/comments | https://api.github.com/repos/huggingface/datasets/issues/2897/events | https://github.com/huggingface/datasets/pull/2897 | 993,798,386 | MDExOlB1bGxSZXF1ZXN0NzMxOTA0ODk4 | 2,897 | Add OpenAI's HumanEval dataset | [] | closed | false | null | 1 | 2021-09-11T09:37:47Z | 2021-09-16T15:02:11Z | 2021-09-16T15:02:11Z | null | This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. This dataset is useful to evaluate code generation models. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2897/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2897",
"merged_at": "2021-09-16T15:02:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2897"
} | true | [
"I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/986/comments | https://api.github.com/repos/huggingface/datasets/issues/986/events | https://github.com/huggingface/datasets/pull/986 | 755,047,470 | MDExOlB1bGxSZXF1ZXN0NTMwODM0MzYx | 986 | Add SciTLDR Dataset | [] | closed | false | null | 5 | 2020-12-02T08:11:16Z | 2020-12-02T18:37:22Z | 2020-12-02T18:02:59Z | null | Adds the SciTLDR Dataset by AI2
Added README card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/986/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/986/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/986.diff",
"html_url": "https://github.com/huggingface/datasets/pull/986",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/986.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/986"
} | true | [
"CI failures seem to be unrelated (related to `norwegian_ner`)\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\n```",
"you can just rebase from master to fix the CI :) ",
"can you just rebase from master before we merge ?",
"Sorry, the rebase from master went horribly wrong, I guess I'll just open another PR\r\n\r\nClosing this one due to a mistake in rebasing :(",
"Continued in #1014 "
] |
https://api.github.com/repos/huggingface/datasets/issues/3261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3261/comments | https://api.github.com/repos/huggingface/datasets/issues/3261/events | https://github.com/huggingface/datasets/issues/3261 | 1,052,346,381 | I_kwDODunzps4-uYgN | 3,261 | Scifi_TV_Shows: Having trouble getting viewer to find appropriate files | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2021-11-12T19:25:19Z | 2021-12-21T10:24:10Z | 2021-12-21T10:24:10Z | null | ## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*'
**Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows)
I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance!
Am I the one who added this dataset? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3261/timeline | null | completed | null | null | false | [
"Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272\r\n\r\nYou can navigate into the archive this way instead:\r\n```python\r\n# in split_generators\r\ndata_dir = dl_manager.download_and_extract(url)\r\ntrain_filepath = os.path.join(data_dir, \"all-sci-fi-data-train.txt\")\r\nreturn [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"filepath\": train_filepath,\r\n },\r\n ),\r\n...\r\n])\r\n\r\n# in generate_examples\r\nwith open(filepath, encoding=\"utf-8\") as f:\r\n ...\r\n```",
"It's working: https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/viewer/Scifi_TV_Shows/test\r\n\r\n<img width=\"1494\" alt=\"Capture d’écran 2021-12-21 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/146914068-f4b7225f-42c5-471d-9c73-2adac722162f.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2921/comments | https://api.github.com/repos/huggingface/datasets/issues/2921/events | https://github.com/huggingface/datasets/issues/2921 | 997,325,424 | I_kwDODunzps47cfpw | 2,921 | Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values" | [] | closed | false | null | 0 | 2021-09-15T17:12:11Z | 2021-09-15T17:21:45Z | 2021-09-15T17:21:45Z | null | This error has been introduced in https://github.com/huggingface/datasets/pull/2361
To reproduce:
```python
import numpy as np
from datasets import Dataset
d = Dataset.from_dict({"a": [np.zeros((2, 2))]})
```
raises
```python
Traceback (most recent call last):
File "playground/ttest.py", line 5, in <module>
d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch")
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict
File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 223, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__
out = pa.array(self.data, type=type)
File "pyarrow/array.pxi", line 306, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2921/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2921/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4830/comments | https://api.github.com/repos/huggingface/datasets/issues/4830/events | https://github.com/huggingface/datasets/pull/4830 | 1,336,177,937 | PR_kwDODunzps49Cdro | 4,830 | Fix task tags in dataset cards | [] | closed | false | null | 2 | 2022-08-11T16:06:06Z | 2022-08-11T16:37:27Z | 2022-08-11T16:23:00Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4830/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4830.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4830",
"merged_at": "2022-08-11T16:23:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4830.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4830"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
https://api.github.com/repos/huggingface/datasets/issues/2857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2857/comments | https://api.github.com/repos/huggingface/datasets/issues/2857/events | https://github.com/huggingface/datasets/pull/2857 | 984,093,938 | MDExOlB1bGxSZXF1ZXN0NzIzNTY5OTE4 | 2,857 | Update: Openwebtext - update size | [] | closed | false | null | 1 | 2021-08-31T17:11:03Z | 2022-02-15T10:38:03Z | 2021-09-07T09:44:32Z | null | Update the size of the Openwebtext dataset
I also regenerated the dataset_infos.json, but the data file checksum didn't change, nor did the number of examples (8013769 examples).
Close #2839, close #726. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2857/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2857/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2857.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2857",
"merged_at": "2021-09-07T09:44:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2857.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2857"
} | true | [
"merging since the CI error in unrelated to this PR and fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/4628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4628/comments | https://api.github.com/repos/huggingface/datasets/issues/4628/events | https://github.com/huggingface/datasets/pull/4628 | 1,293,361,308 | PR_kwDODunzps46zvFJ | 4,628 | Fix time type `_arrow_to_datasets_dtype` conversion | [] | closed | false | null | 1 | 2022-07-04T16:20:15Z | 2022-07-07T14:08:38Z | 2022-07-07T13:57:12Z | null | Fix #4620
The issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])` instead of `Time64Type(time64[unit])`, and `DataType` doesn't expose the `unit` attribute. I believe this is a bug in PyArrow. Luckily, both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(type))` to convert them both to the `Time64Type(time64[unit])` format.
cc @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4628/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4628",
"merged_at": "2022-07-07T13:57:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4628"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4369/comments | https://api.github.com/repos/huggingface/datasets/issues/4369/events | https://github.com/huggingface/datasets/pull/4369 | 1,240,245,642 | PR_kwDODunzps44CpCe | 4,369 | Add redirect to dataset script in the repo structure page | [] | closed | false | null | 1 | 2022-05-18T17:05:33Z | 2022-05-19T08:19:01Z | 2022-05-19T08:10:51Z | null | Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4369/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4369/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4369.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4369",
"merged_at": "2022-05-19T08:10:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4369.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4369"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4588/comments | https://api.github.com/repos/huggingface/datasets/issues/4588/events | https://github.com/huggingface/datasets/pull/4588 | 1,287,368,751 | PR_kwDODunzps46f2kF | 4,588 | Host head_qa data on the Hub and fix NonMatchingChecksumError | [] | closed | false | null | 3 | 2022-06-28T13:39:28Z | 2022-07-05T16:01:15Z | 2022-07-05T15:49:52Z | null | This PR:
- Hosts head_qa data on the Hub instead of Google Drive
- Fixes NonMatchingChecksumError
Fix https://huggingface.co/datasets/head_qa/discussions/1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4588/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4588",
"merged_at": "2022-07-05T15:49:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4588"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @albertvillanova ! Thanks for the fix ;)\r\nCan I safely checkout from this branch to build `datasets` or it is preferable to wait until all CI tests pass?\r\nThanks 🙏 ",
"@younesbelkada we have just merged this PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/2038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2038/comments | https://api.github.com/repos/huggingface/datasets/issues/2038/events | https://github.com/huggingface/datasets/issues/2038 | 830,036,875 | MDU6SXNzdWU4MzAwMzY4NzU= | 2,038 | outdated dataset_infos.json might fail verifications | [] | closed | false | null | 2 | 2021-03-12T11:41:54Z | 2021-03-16T16:27:40Z | 2021-03-16T16:27:40Z | null | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc..
Could you please update this file or point me how to update this file?
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2038/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```",
"Fixed by #2041, thanks again @songfeng !"
] |
https://api.github.com/repos/huggingface/datasets/issues/727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/727/comments | https://api.github.com/repos/huggingface/datasets/issues/727/events | https://github.com/huggingface/datasets/issues/727 | 719,386,366 | MDU6SXNzdWU3MTkzODYzNjY= | 727 | Parallel downloads progress bar flickers | [] | open | false | null | 0 | 2020-10-12T13:36:05Z | 2020-10-12T13:36:05Z | null | null | When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that we could simply specify `position=i` for i = 0 to n, where n is the number of files to download, when instantiating the tqdm progress bar.
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that show the current downloads. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/727/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5589/comments | https://api.github.com/repos/huggingface/datasets/issues/5589/events | https://github.com/huggingface/datasets/pull/5589 | 1,603,535,704 | PR_kwDODunzps5K9K1i | 5,589 | Revert "pass the dataset features to the IterableDataset.from_generator" | [] | closed | false | null | 5 | 2023-02-28T17:52:04Z | 2023-03-21T14:21:45Z | 2023-03-21T14:18:18Z | null | This reverts commit b91070b9c09673e2e148eec458036ab6a62ac042 (temporarily)
It hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images unnecessarily). I think we need to fix this before re-adding it
cc @mariosasko @Hubert-Bonisseur | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5589/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5589.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5589",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5589.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5589"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008442 / 0.011353 (-0.002911) | 0.004567 / 0.011008 (-0.006441) | 0.100688 / 0.038508 (0.062180) | 0.029568 / 0.023109 (0.006459) | 0.306993 / 0.275898 (0.031095) | 0.362626 / 0.323480 (0.039146) | 0.006983 / 0.007986 (-0.001002) | 0.003424 / 0.004328 (-0.000905) | 0.079050 / 0.004250 (0.074799) | 0.036087 / 0.037052 (-0.000966) | 0.318205 / 0.258489 (0.059716) | 0.353882 / 0.293841 (0.060041) | 0.033091 / 0.128546 (-0.095455) | 0.011468 / 0.075646 (-0.064178) | 0.321125 / 0.419271 (-0.098146) | 0.040645 / 0.043533 (-0.002888) | 0.309827 / 0.255139 (0.054688) | 0.344848 / 0.283200 (0.061648) | 0.087100 / 0.141683 (-0.054583) | 1.465123 / 1.452155 (0.012968) | 1.499457 / 1.492716 (0.006741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.171619 / 0.018006 (0.153613) | 0.410198 / 0.000490 (0.409709) | 0.002391 / 0.000200 (0.002191) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022913 / 0.037411 (-0.014499) | 0.097275 / 0.014526 (0.082749) | 0.103902 / 0.176557 (-0.072655) | 0.148855 / 0.737135 (-0.588281) | 0.107247 / 0.296338 (-0.189092) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413139 / 0.215209 (0.197930) | 4.131760 / 2.077655 (2.054105) | 1.854491 / 1.504120 (0.350371) | 1.625524 / 1.541195 (0.084329) | 1.666665 / 1.468490 
(0.198175) | 0.687105 / 4.584777 (-3.897672) | 3.327124 / 3.745712 (-0.418588) | 1.830820 / 5.269862 (-3.439042) | 1.147930 / 4.565676 (-3.417746) | 0.081586 / 0.424275 (-0.342689) | 0.012422 / 0.007607 (0.004815) | 0.523723 / 0.226044 (0.297678) | 5.246977 / 2.268929 (2.978049) | 2.288350 / 55.444624 (-53.156275) | 1.933740 / 6.876477 (-4.942737) | 1.954356 / 2.142072 (-0.187716) | 0.804434 / 4.805227 (-4.000793) | 0.147621 / 6.500664 (-6.353043) | 0.064835 / 0.075469 (-0.010634) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244841 / 1.841788 (-0.596947) | 13.758465 / 8.074308 (5.684157) | 13.984576 / 10.191392 (3.793184) | 0.144860 / 0.680424 (-0.535564) | 0.028616 / 0.534201 (-0.505584) | 0.401928 / 0.579283 (-0.177355) | 0.415294 / 0.434364 (-0.019069) | 0.476483 / 0.540337 (-0.063854) | 0.569257 / 1.386936 (-0.817679) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006556 / 0.011353 (-0.004797) | 0.004502 / 0.011008 (-0.006507) | 0.074828 / 0.038508 (0.036319) | 0.027537 / 0.023109 (0.004427) | 0.339961 / 0.275898 (0.064063) | 0.372491 / 0.323480 (0.049011) | 0.005010 / 0.007986 (-0.002976) | 0.004624 / 0.004328 (0.000295) | 0.074459 / 0.004250 (0.070208) | 0.037539 / 0.037052 (0.000486) | 0.341031 / 0.258489 (0.082542) | 0.383397 / 0.293841 (0.089556) | 0.031706 / 0.128546 (-0.096840) | 0.011542 / 0.075646 (-0.064104) | 0.084882 / 0.419271 (-0.334389) | 0.041860 / 0.043533 (-0.001673) | 0.338699 / 0.255139 (0.083560) | 0.365666 / 0.283200 (0.082467) | 0.088966 / 0.141683 (-0.052717) | 1.502493 / 1.452155 (0.050339) | 1.570746 / 1.492716 (0.078030) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217547 / 0.018006 (0.199541) | 0.392407 / 0.000490 (0.391918) | 0.000388 / 0.000200 (0.000188) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024571 / 0.037411 (-0.012840) | 0.099259 / 0.014526 (0.084734) | 0.107850 / 0.176557 (-0.068707) | 0.157686 / 0.737135 (-0.579449) | 0.109761 / 0.296338 (-0.186578) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434791 / 0.215209 (0.219582) | 4.323099 / 2.077655 (2.245444) | 2.063610 / 1.504120 (0.559490) | 1.866136 / 1.541195 (0.324941) | 1.910185 / 1.468490 (0.441695) | 0.696584 / 4.584777 (-3.888193) | 3.398017 / 3.745712 (-0.347695) | 1.848473 / 5.269862 (-3.421388) | 1.168238 / 4.565676 (-3.397438) | 0.083222 / 0.424275 (-0.341053) | 0.012332 / 0.007607 (0.004725) | 0.538953 / 0.226044 (0.312909) | 5.421273 / 2.268929 (3.152344) | 2.499877 / 55.444624 (-52.944747) | 2.161853 / 6.876477 (-4.714624) | 2.183941 / 2.142072 (0.041868) | 0.803916 / 4.805227 (-4.001311) | 0.150266 / 6.500664 (-6.350398) | 0.067399 / 0.075469 (-0.008070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280479 / 1.841788 (-0.561309) | 13.728074 / 8.074308 (5.653766) | 12.946098 / 10.191392 (2.754706) | 0.128459 / 0.680424 (-0.551965) | 0.016567 / 0.534201 (-0.517634) | 0.374461 / 0.579283 (-0.204822) | 0.386973 / 0.434364 (-0.047391) | 0.459754 / 0.540337 (-0.080583) | 0.543870 / 1.386936 (-0.843066) |\n\n</details>\n</details>\n\n\n",
"Instead of reverting the change, maybe we can use the same conversion in `to_iterable_dataset` as in `ArrowBasedBuilder._as_streaming_dataset` to avoid decoding images twice?",
"True, let me take a look",
"Closing in favor of https://github.com/huggingface/datasets/pull/5655"
] |
https://api.github.com/repos/huggingface/datasets/issues/1663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1663/comments | https://api.github.com/repos/huggingface/datasets/issues/1663/events | https://github.com/huggingface/datasets/pull/1663 | 775,914,320 | MDExOlB1bGxSZXF1ZXN0NTQ2NTAzMjg5 | 1,663 | update saving and loading methods for faiss index so to accept path l… | [] | closed | false | null | 1 | 2020-12-29T14:15:37Z | 2021-01-18T09:27:23Z | 2021-01-18T09:27:23Z | null | - Update saving and loading methods for faiss index so to accept path like objects from pathlib
The current code only supports using a string type to save and load a faiss index. This change makes it possible to use a string type OR a Path from [pathlib](https://docs.python.org/3/library/pathlib.html). The code becomes more intuitive this way, I think. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1663/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1663/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1663.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1663",
"merged_at": "2021-01-18T09:27:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1663.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1663"
} | true | [
"Seems ok for me, what do you think @lhoestq ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2809/comments | https://api.github.com/repos/huggingface/datasets/issues/2809/events | https://github.com/huggingface/datasets/pull/2809 | 971,902,613 | MDExOlB1bGxSZXF1ZXN0NzEzNTc2Njcz | 2,809 | Add Beans Dataset | [] | closed | false | null | 0 | 2021-08-16T16:22:33Z | 2021-08-26T11:42:27Z | 2021-08-26T11:42:27Z | null | Adds the [beans](https://github.com/AI-Lab-Makerere/ibean/) image classification dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2809/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2809/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2809",
"merged_at": "2021-08-26T11:42:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2809"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4993/comments | https://api.github.com/repos/huggingface/datasets/issues/4993/events | https://github.com/huggingface/datasets/pull/4993 | 1,379,044,435 | PR_kwDODunzps4_QYas | 4,993 | fix: avoid casting tuples after Dataset.map | [] | closed | false | null | 1 | 2022-09-20T08:45:16Z | 2022-09-20T16:11:27Z | 2022-09-20T13:08:29Z | null | This PR updates features.py to avoid casting tuples to lists when reading the results of Dataset.map as suggested by @lhoestq [here](https://github.com/huggingface/datasets/issues/4676#issuecomment-1187371367) in https://github.com/huggingface/datasets/issues/4676.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4993/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4993.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4993",
"merged_at": "2022-09-20T13:08:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4993.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4993"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/987/comments | https://api.github.com/repos/huggingface/datasets/issues/987/events | https://github.com/huggingface/datasets/pull/987 | 755,059,469 | MDExOlB1bGxSZXF1ZXN0NTMwODQ0MTQ4 | 987 | Add OPUS DOGC dataset | [] | closed | false | null | 1 | 2020-12-02T08:30:32Z | 2020-12-04T13:27:41Z | 2020-12-04T13:27:41Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/987/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/987.diff",
"html_url": "https://github.com/huggingface/datasets/pull/987",
"merged_at": "2020-12-04T13:27:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/987.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/987"
} | true | [
"merging since the CI is fixed on master"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/4580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4580/comments | https://api.github.com/repos/huggingface/datasets/issues/4580/events | https://github.com/huggingface/datasets/issues/4580 | 1,286,312,912 | I_kwDODunzps5Mq5PQ | 4,580 | Dataset Viewer issue for multi_news | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-06-27T20:25:25Z | 2022-06-28T14:08:48Z | 2022-06-28T14:08:48Z | null | ### Link
https://huggingface.co/datasets/multi_news
### Description
Not sure what the index error is referring to here:
```
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4580/timeline | null | completed | null | null | false | [
"Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. Let's see if the license allows that.",
"I guess we can host the data: https://github.com/Alex-Fabbri/Multi-News/blob/master/LICENSE.txt"
] |
https://api.github.com/repos/huggingface/datasets/issues/1988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1988/comments | https://api.github.com/repos/huggingface/datasets/issues/1988/events | https://github.com/huggingface/datasets/issues/1988 | 822,324,605 | MDU6SXNzdWU4MjIzMjQ2MDU= | 1,988 | Readme.md is misleading about kinds of datasets? | [] | closed | false | null | 1 | 2021-03-04T17:04:20Z | 2021-08-04T18:05:23Z | 2021-08-04T18:05:23Z | null | Hi!
In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text."
But here:
https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117
You mention other kinds of datasets, with images and so on. I'm confused.
Is it possible to use it to store, say, imagenet locally? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1988/timeline | null | completed | null | null | false | [
"Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1648/comments | https://api.github.com/repos/huggingface/datasets/issues/1648/events | https://github.com/huggingface/datasets/pull/1648 | 775,542,360 | MDExOlB1bGxSZXF1ZXN0NTQ2MjAxNTQ0 | 1,648 | Update README.md | [] | closed | false | null | 0 | 2020-12-28T18:59:06Z | 2020-12-29T10:39:14Z | 2020-12-29T10:39:14Z | null | added dataset summary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1648/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1648.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1648",
"merged_at": "2020-12-29T10:39:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1648.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1648"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3994/comments | https://api.github.com/repos/huggingface/datasets/issues/3994/events | https://github.com/huggingface/datasets/pull/3994 | 1,178,211,138 | PR_kwDODunzps404wWu | 3,994 | Change audio column from string path to Audio feature in ASR task | [] | closed | false | null | 0 | 2022-03-23T14:34:52Z | 2022-03-23T15:43:43Z | 2022-03-23T15:43:43Z | null | Will fix #3990 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3994/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3994.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3994",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3994.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3994"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3111/comments | https://api.github.com/repos/huggingface/datasets/issues/3111/events | https://github.com/huggingface/datasets/issues/3111 | 1,030,598,983 | I_kwDODunzps49bbFH | 3,111 | concatenate_datasets removes ClassLabel typing. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-19T18:05:31Z | 2021-10-21T14:50:21Z | 2021-10-21T14:50:21Z | null | ## Describe the bug
When concatenating two datasets, we lose typing of ClassLabel columns.
I can work on this if this is a legitimate bug.
## Steps to reproduce the bug
```python
import datasets
from datasets import Dataset, ClassLabel, Value, concatenate_datasets
DS_LEN = 100
my_dataset = Dataset.from_dict(
{
"sentence": [f"{chr(i % 10)}" for i in range(DS_LEN)],
"label": [i % 2 for i in range(DS_LEN)]
}
)
my_predictions = Dataset.from_dict(
{
"pred": [(i + 1) % 2 for i in range(DS_LEN)]
}
)
my_dataset = my_dataset.cast(datasets.Features({"sentence": Value("string"), "label": ClassLabel(2, names=["POS", "NEG"])}))
print("Original")
print(my_dataset)
print(my_dataset.features)
concat_ds = concatenate_datasets([my_dataset, my_predictions], axis=1)
print("Concatenated")
print(concat_ds)
print(concat_ds.features)
```
## Expected results
The features of `concat_ds` should contain ClassLabel.
## Actual results
On master, I get:
```
{'sentence': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None), 'pred': Value(dtype='int64', id=None)}
```
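As a possible interim workaround (an untested sketch building on the reproduction above, not a fix), the concatenated dataset could be re-cast so that `label` keeps its `ClassLabel` type:
```python
# Possible workaround (sketch): re-cast the concatenated dataset with the expected features.
expected_features = datasets.Features({**my_dataset.features, **my_predictions.features})
concat_ds = concat_ds.cast(expected_features)
print(concat_ds.features)  # "label" should be a ClassLabel again
```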
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3111/timeline | null | completed | null | null | false | [
"Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1"
] |
https://api.github.com/repos/huggingface/datasets/issues/2230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2230/comments | https://api.github.com/repos/huggingface/datasets/issues/2230/events | https://github.com/huggingface/datasets/issues/2230 | 859,817,159 | MDU6SXNzdWU4NTk4MTcxNTk= | 2,230 | Keys yielded while generating dataset are not being checked | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 9 | 2021-04-16T13:29:47Z | 2021-05-10T17:31:21Z | 2021-05-10T17:31:21Z | null | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Even after having a tuple as key, the dataset is generated without any warning.
Also, as tested in the case of `anli` dataset (I tweeked the dataset script to use `1` as a key for every example):
```
>>> import datasets
>>> nik = datasets.load_dataset('anli')
Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299...
0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''}
2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''}
1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''}
1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''}
1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''}
```
Here also, the dataset was generated successfully, even though it had identical keys, without any warning.
The reason appears to stem from here:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988
Here, although it has access to every key, the key is not being checked and the example is written directly:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992
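For illustration, a minimal sketch of the kind of key check that could be added around this point (the helper names and the md5-based hashing are hypothetical, not part of the actual `ArrowWriter` API):
```python
import hashlib

def hash_key(key, split_name=""):
    # Hypothetical helper: only str or int keys are allowed (type check).
    if not isinstance(key, (str, int)):
        raise TypeError(f"Key must be str or int, got {type(key)}")
    # Salt with the split name so identical keys from different splits don't collide.
    return hashlib.md5(f"{split_name}-{key}".encode("utf-8")).hexdigest()

def check_batch_keys(keys, split_name=""):
    # Hypothetical helper: detect duplicate keys within one written batch.
    seen = set()
    for key in keys:
        digest = hash_key(key, split_name)
        if digest in seen:
            raise ValueError(f"Duplicate key found: {key!r}")
        seen.add(digest)
```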
I would like to take this issue if you allow me. Thank You! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2230/timeline | null | completed | null | null | false | [
"Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?",
"Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how the `ArrowWriter` functions, I think we can implement this as follows:\r\n\r\n1. First, we would have to update the `ArrowWriter.write()` function here:\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L296\r\nso that it accepts an additional argument `key` which would be appended along with the example here after hashing.\r\n\r\n2. Then, we would need to create a `Hasher` class which will take the key as its input and return a hash for it (We might need to use some hash salt which can be passed to the ArrowWriter.writer() with value equal to the `split_name` for differentiating between same keys of different splits)\r\n\r\n We can use the `hashlib.md5` function for hashing which will conert each key to its byte code before hashing (depending on the data type of the key) **Thus, the `key` type will be verified here**.\r\n\r\n3. Now, we would have to edit this\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L257\r\n so that it iterates over each `(hash, example)` pair (sorted according to hash). We can then simply **check whether each hash is different from the previous hash** (since they will be sorted)\r\n\r\nHowever, since I'm not very familiar with how the data is being written on disk in the form of a table, I might need some guidance for Step 3. \r\nPlease let me know your thought on this. Thanks!",
"Interesting !\r\nWe keep the dataset sorted in the order examples are generated by the builder (we expect the dataset builders to generate examples in deterministic order). Therefore I don't think we should shuffle the examples with the hashing. Let me know what you think.\r\nOther that that, I really like the idea of checking for keys duplicates in `write_examples_on_file` :)\r\n\r\nThis looks like a great plan ! Feel free to open a PR and ping me if you have questions or if I can help\r\n",
"@lhoestq I'm glad you liked the idea!\r\nI think that since the keys will be unique and deterministic in the nature themselves, so even if we shuffle the examples according to the hash, a deterministic order would still be maintained (as the keys will always have the same hash, whenever the dataset is generated). \r\nAnd since, we are not dealing with time series data (which would require the data to be in original order), I don't think the order of examples would matter much, as long as the order is deterministic and constant for all users.\r\n\r\nI think that this is also what was originally envisioned as mentioned in the documentation here:\r\nhttps://github.com/huggingface/datasets/blob/6775661b19d2ec339784f3d84553a3996a1d86c3/src/datasets/builder.py#L973\r\n\r\nAlso, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\nLet me know your thoughts in it! I would be opening a PR soon :)",
"When users load their own data, they expect the order to stay the same. I think that shuffling the data can make things inconvenient.\r\n\r\n> I think that this is also what was originally envisioned as mentioned in the documentation here:\r\n\r\nThis part was originally developed by tensorflow datasets, and tensorflow datasets indeed does the shuffling. However in this library this is probably not what we want in the general case. But if @albertvillanova and @thomwolf you have opinions on this please let us know.\r\n\r\n> Also, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\n\r\nMaybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch, but there might still be duplicates across batches. For 10 000 examples the hashes can just be stored as a python `set`.\r\n\r\nOtherwise if we want full deduplication, we need an extra tool that allows to temporarily save and query hashes that may need to use disk space rather than memory.",
"Yes I think we want to keep the original order by default and only shuffle when the user ask for it (for instance by calling `dataset.shuffle()`). That’s how I had it in mind originally.",
"Hey @lhoestq, I just had a more in-depth look at the original TFDS code about why the keys and hash were used in the first place.\r\n\r\nIn my opinion, the only use that the `hash(key)` serves is that it allows us to shuffle the examples in a deterministic order (as each example will always yield the same key and thus, the same hash on every system) so that the same dataset is generated for each user, irrespective of the order the examples are yielded by the dataset builder on different user systems.\r\n\r\nOtherwise, if we are not shuffling, then while yielding and writing the data, after getting the key and hashing it for an example, I can't quite see the use of the hash or the key. The hash will simply be generated for each example but not actually used anywhere?\r\n\r\n@lhoestq @thomwolf It would be great if you could explain a bit more about the usage of keys. Thanks!\r\n",
"In `datasets` the keys are currently ignored.\r\nFor shuffling we don't use the keys. Instead we shuffle an array of indices. Since both the original order of the dataset and the indices shuffling are deterministic, then `dataset.shuffle` is deterministic as well.\r\nWe can use it to:\r\n1. detect duplicates\r\n2. verify that the generation order is indeed deterministic\r\n3. maybe more ?",
"Thanks a lot @lhoestq. I think I understand what we need to do now. The keys can indeed be used for detecting duplicates in generated examples as well as ensuring the order.\r\n\r\n> Maybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch,\r\n\r\nI think that checking for duplicates in every batch independently would be sufficient as the probability of collisions using something like `MD5` is very low. I would be opening a draft PR soon. It would be great to have your guidance. Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4282/comments | https://api.github.com/repos/huggingface/datasets/issues/4282/events | https://github.com/huggingface/datasets/pull/4282 | 1,225,616,545 | PR_kwDODunzps43TZYL | 4,282 | Don't do unnecessary list type casting to avoid replacing None values by empty lists | [] | closed | false | null | 3 | 2022-05-04T16:37:01Z | 2022-05-06T10:43:58Z | 2022-05-06T10:37:00Z | null | In certain cases, `None` values are replaced by empty lists when casting feature types.
It happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (to change the integer precision for example). In this case you'd get [[], [0, 1, 2, 3]] for example. This issue comes from PyArrow, see the discussion in https://github.com/huggingface/datasets/issues/3676
This issue also happens when no type casting is needed, because casting is supposed to be a no-op in this case. But as https://github.com/huggingface/datasets/issues/3676 shown, it's not the case and `None` are replaced by empty lists even if we cast to the exact same type.
In this PR I just workaround this bug in the case where no type casting is needed. In particular, I only call `pa.ListArray.from_arrays` only when necessary.
I also added a warning when some `None` are effectively replaced by empty lists. I wanted to raise an error in this case, but maybe we should wait a major update to do so
This PR fixes this particular case, that is occurring in `run_qa.py` in `transformers`:
```python
from datasets import Dataset
ds = Dataset.from_dict({"a": range(4)})
ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"])
print(ds.to_pandas())
# before:
# b
# 0 [None, [0]]
# 1 [[], [0]]
# 2 [[], [0]]
# 3 [[], [0]]
#
# now:
# b
# 0 [None, [0]]
# 1 [None, [0]]
# 2 [None, [0]]
# 3 [None, [0]]
```
cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4282/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4282",
"merged_at": "2022-05-06T10:37:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4282"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Quick question about the message in the warning. You say \"will be fixed in a future major version\" but don't you mean \"will raise an error in a future major version\"?",
"Right ! Good catch, thanks, I updated the message to say \"will raise an error in a future major version\""
] |
https://api.github.com/repos/huggingface/datasets/issues/2612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2612/comments | https://api.github.com/repos/huggingface/datasets/issues/2612/events | https://github.com/huggingface/datasets/pull/2612 | 940,604,512 | MDExOlB1bGxSZXF1ZXN0Njg2NjUwMjk3 | 2,612 | Return Python float instead of numpy.float64 in sklearn metrics | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 3 | 2021-07-09T09:48:09Z | 2021-07-12T14:12:53Z | 2021-07-09T13:03:54Z | null | This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.
The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like:
```python
import yaml
from datasets import load_metric
metric = load_metric("accuracy")
score = metric.compute(predictions=[0,1], references=[0,1])
print(yaml.dump(score["accuracy"])) # output below
# !!python/object/apply:numpy.core.multiarray.scalar
# - !!python/object/apply:numpy.dtype
# args:
# - f8
# - false
# - true
# state: !!python/tuple
# - 3
# - <
# - null
# - null
# - null
# - -1
# - -1
# - 0
# - !!binary |
# AAAAAAAA8D8=
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2612/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2612.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2612",
"merged_at": "2021-07-09T13:03:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2612.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2612"
} | true | [
"I opened an issue on the `sklearn` repo to understand why `numpy.float64` is the default: https://github.com/scikit-learn/scikit-learn/discussions/20490",
"It could be surprising at first to use `tolist()` on numpy scalars but it works ^^",
"did the same for Pearsonr here: https://github.com/huggingface/datasets/pull/2614"
] |
https://api.github.com/repos/huggingface/datasets/issues/6010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6010/comments | https://api.github.com/repos/huggingface/datasets/issues/6010/events | https://github.com/huggingface/datasets/issues/6010 | 1,793,838,152 | I_kwDODunzps5q68xI | 6,010 | Improve `Dataset`'s string representation | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2023-07-07T16:38:03Z | 2023-07-16T13:00:18Z | null | null | Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows.
We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6010/timeline | null | null | null | null | false | [
"I want to take a shot at this if possible ",
"Yes, feel free to work on this.\r\n\r\nYou can check the PyArrow Table `__repr__` and Polars DataFrame `__repr__`/`_repr_html_` implementations for some pointers/ideas."
] |
https://api.github.com/repos/huggingface/datasets/issues/2029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2029/comments | https://api.github.com/repos/huggingface/datasets/issues/2029/events | https://github.com/huggingface/datasets/issues/2029 | 829,097,290 | MDU6SXNzdWU4MjkwOTcyOTA= | 2,029 | Loading a faiss index KeyError | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 4 | 2021-03-11T12:16:13Z | 2021-03-12T00:21:09Z | 2021-03-12T00:21:09Z | null | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (dataset2) with the same text and label information as dataset1
6. Try to load the faiss index from file to dataset2
7. Get `KeyError: "Column embeddings not in the dataset"`
I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU.
https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing
Ubuntu Version
VERSION="18.04.5 LTS (Bionic Beaver)"
datasets==1.4.1
faiss==1.5.3
faiss-gpu==1.7.0
torch==1.8.0+cu101
transformers==4.3.3
NVIDIA-SMI 460.56
Driver Version: 460.32.03
CUDA Version: 11.2
Tesla K80
I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index
I included the exact code from the documentation at the end of the notebook to show that they don't work either.
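For context, here is a minimal sketch of a save/reload flow that keeps both the column and the index available (assuming `ds_with_embeddings` is a dataset that already contains the `embeddings` column, and that it is also saved with `save_to_disk`):
```python
from datasets import load_from_disk

# Save both the dataset (with its "embeddings" column) and the faiss index.
ds_with_embeddings.save_to_disk("my_dataset")
ds_with_embeddings.save_faiss_index("embeddings", "my_index.faiss")

# Later: reload the dataset from disk, then re-attach the index.
# Note: load_faiss_index attaches an index named "embeddings";
# it does not re-create the "embeddings" column.
ds = load_from_disk("my_dataset")
ds.load_faiss_index("embeddings", "my_index.faiss")
```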
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2029/timeline | null | completed | null | null | false | [
"In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r\n```python\r\ndataset2 = load_from_disk(dataset_filename)\r\n```\r\nwhere `dataset_filename` is the place where you saved you dataset with the embeddings in the first place.",
"Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index \r\n\r\nI copy-pasted it here.\r\n\r\n> When you are done with your queries you can save your index on disk:\r\n> \r\n> ```python\r\n> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n> ```\r\n> Then reload it later:\r\n> \r\n> ```python\r\n> ds = load_dataset('crime_and_punish', split='train[:100]')\r\n> ds.load_faiss_index('embeddings', 'my_index.faiss')\r\n> ```",
"Hi !\r\n\r\nThe code of the example is valid.\r\nAn index is a search engine, it's not considered a column of a dataset.\r\nWhen you do `ds.load_faiss_index(\"embeddings\", 'my_index.faiss')`, it attaches an index named \"embeddings\" to the dataset but it doesn't re-add the \"embeddings\" column. You can list the indexes of a dataset by using `ds.list_indexes()`.\r\n\r\nIf I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nThis looks misleading indeed, and we should add a note to make it more explicit that it doesn't store the column that was used to build the index.\r\n\r\nFeel free to open a PR to suggest an improvement on the documentation if you want to contribute :)",
"> If I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nYes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`\r\n\r\nWhat I learned was\r\n1. column and index are different\r\n2. loading the index does not create a column\r\n3. the column is not needed to be able to use the index\r\n4. RAG needs both the embeddings column and the index\r\n\r\nIf I can come up with a way to articulate this in the right spot in the docs, I'll open a PR"
] |
https://api.github.com/repos/huggingface/datasets/issues/4419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4419/comments | https://api.github.com/repos/huggingface/datasets/issues/4419/events | https://github.com/huggingface/datasets/issues/4419 | 1,252,652,896 | I_kwDODunzps5Kqfdg | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 3 | 2022-05-30T12:13:18Z | 2022-09-30T16:01:37Z | 2022-09-30T16:01:37Z | null | **Is your feature request related to a problem? Please describe.**
This is more of a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` over tuples rather than `assertEqual`? `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.
Find an example of an `assertEqual` over a tuple in 🤗 `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570
**Describe the solution you'd like**
Start slowly replacing the `assertEqual` statements with `assertTupleEqual` wherever the assertion is done over a Python tuple, as we already do for Python lists with `assertListEqual` rather than `assertEqual`.
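As a minimal illustration of the difference (a made-up test case, just to show the assertion style):
```python
import unittest

class ShapeTest(unittest.TestCase):
    def test_shape(self):
        shape = (2, 3)
        # Both assertions pass here, but assertTupleEqual makes the intent explicit
        # and additionally fails if either operand is not a tuple.
        self.assertEqual(shape, (2, 3))
        self.assertTupleEqual(shape, (2, 3))

if __name__ == "__main__":
    unittest.main()
```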
**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable. Otherwise, if you consider this should stay as `assertEqual` rather than `assertSequenceEqual`, feel free to close this issue! Thanks 🤗
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4419/timeline | null | completed | null | null | false | [
"Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.",
"Hi @mariosasko, right! I'll update the issue title/desc with `assertTupleEqual` even though as you said it seems to be internally using `assertEqual` so I'm not sure whether it's worth it or not...\r\n\r\nhttps://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual",
"I thought we were supposed to move gradually from `unittest` to `pytest`..."
] |
https://api.github.com/repos/huggingface/datasets/issues/2892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2892/comments | https://api.github.com/repos/huggingface/datasets/issues/2892/events | https://github.com/huggingface/datasets/issues/2892 | 993,274,572 | MDU6SXNzdWU5OTMyNzQ1NzI= | 2,892 | Error when encoding a dataset with None objects with a Sequence feature | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-10T14:11:43Z | 2021-09-13T14:18:13Z | 2021-09-13T14:17:42Z | null | There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-40add67f8751> in <module>
2 data = {"a": [[0], None]}
3 features = Features({"a": Sequence(Value("int32"))})
----> 4 dataset = Dataset.from_dict(data, features=features)
[...]
~/datasets/features.py in encode_nested_example(schema, obj)
888 if isinstance(obj, str): # don't interpret a string as a list
889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
--> 890 return [encode_nested_example(schema.feature, o) for o in obj]
891 # Object with special encoding:
892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
TypeError: 'NoneType' object is not iterable
```
Instead, it should run without error, as if the `features` were not passed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2892/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2892/timeline | null | completed | null | null | false | [
"This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2704/comments | https://api.github.com/repos/huggingface/datasets/issues/2704/events | https://github.com/huggingface/datasets/pull/2704 | 950,483,980 | MDExOlB1bGxSZXF1ZXN0Njk1MDIzMTEz | 2,704 | Fix pick default config name message | [] | closed | false | null | 0 | 2021-07-22T09:49:43Z | 2021-07-22T10:02:41Z | 2021-07-22T10:02:40Z | null | The error message to tell which config name to load is not displayed.
This is because the code considered the config kwargs to be non-empty, which is a special case for custom configs created on the fly. This appeared after this change: https://github.com/huggingface/datasets/pull/2659
I fixed that by making the config kwargs empty by default, even if default parameters are passed.
Fix https://github.com/huggingface/datasets/issues/2703 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2704/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2704/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2704.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2704",
"merged_at": "2021-07-22T10:02:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2704.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2704"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3301/comments | https://api.github.com/repos/huggingface/datasets/issues/3301/events | https://github.com/huggingface/datasets/pull/3301 | 1,058,718,957 | PR_kwDODunzps4uyA9o | 3,301 | Add wikipedia tags | [] | closed | false | null | 0 | 2021-11-19T16:39:25Z | 2021-11-19T16:49:30Z | 2021-11-19T16:49:29Z | null | Add the missing tags to the wikipedia dataset card.
I also added the missing language codes to our language codes list.
This should also fix the code snippet that is presented on the Hub to load the dataset: fix https://github.com/huggingface/datasets/issues/3292 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3301/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3301",
"merged_at": "2021-11-19T16:49:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3301"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1462/comments | https://api.github.com/repos/huggingface/datasets/issues/1462/events | https://github.com/huggingface/datasets/pull/1462 | 761,489,274 | MDExOlB1bGxSZXF1ZXN0NTM2MTQ4Njc1 | 1,462 | Added conv ai 2 (Again) | [] | closed | false | null | 6 | 2020-12-10T18:21:55Z | 2020-12-13T00:21:32Z | 2020-12-13T00:21:31Z | null | The original PR -> https://github.com/huggingface/datasets/pull/1383
Reason for creating again -
The reason I had to create the PR again was the master rebasing issue: after rebasing the changes, all the previous commits got added to the branch. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1462/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1462",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1462"
} | true | [
"Looking perfect to me, need to rerun the tests\r\n",
"Thanks, @tanmoyio. \r\nHow do I rerun the tests? Should I change something or push a new commit?",
"@rkc007 you don't need to rerun it, @lhoestq @yjernite will rerun it, as there are huge number of PRs in the queue it might take lil bit of time. ",
"ive just re-run the tests",
"Thank you @abhishekkrthakur. Can you please rerun it again? It seems something was broken in CI during the previous test.",
"@lhoestq Sorry for the mess. I don't know why this keeps on happening. I tried step by step process of updating the PR but seems something is wrong. This happened for 2nd time with the same PR. Apologies for that. \r\n\r\nNew PR -> https://github.com/huggingface/datasets/pull/1527\r\nAlso, I fixed everything in the new PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/2422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2422/comments | https://api.github.com/repos/huggingface/datasets/issues/2422/events | https://github.com/huggingface/datasets/pull/2422 | 905,568,548 | MDExOlB1bGxSZXF1ZXN0NjU2NjM3MzY1 | 2,422 | Fix save_to_disk nested features order in dataset_info.json | [] | closed | false | null | 0 | 2021-05-28T15:03:28Z | 2021-05-28T15:26:57Z | 2021-05-28T15:26:56Z | null | Fix issue https://github.com/huggingface/datasets/issues/2267
The order of the nested features matters (a pyarrow limitation), but the save_to_disk method was saving the feature types as JSON with `sort_keys=True`, which was breaking the order of the nested features. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2422/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2422.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2422",
"merged_at": "2021-05-28T15:26:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2422.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2422"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4301/comments | https://api.github.com/repos/huggingface/datasets/issues/4301/events | https://github.com/huggingface/datasets/pull/4301 | 1,230,401,256 | PR_kwDODunzps43idlE | 4,301 | Add ImageNet-Sketch dataset | [] | closed | false | null | 2 | 2022-05-09T23:38:45Z | 2022-05-23T18:14:14Z | 2022-05-23T18:05:29Z | null | This PR adds the ImageNet-Sketch dataset and resolves #3953 . | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4301/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4301",
"merged_at": "2022-05-23T18:05:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4301"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think you can go ahead with uploading the data, and also ping the author in parallel. I think the images may subject to copyright anyway (scrapped from google image) so the dataset author is not allowed to set a license to the data.\r\n\r\nI think it's fine to upload the dataset as soon as we mention explicitly that the images may be subject to copyright."
] |
https://api.github.com/repos/huggingface/datasets/issues/5222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5222/comments | https://api.github.com/repos/huggingface/datasets/issues/5222/events | https://github.com/huggingface/datasets/issues/5222 | 1,442,412,507 | I_kwDODunzps5V-Xfb | 5,222 | HuggingFace website is incorrectly reporting that my datasets are pickled | [] | closed | false | null | 4 | 2022-11-09T16:41:16Z | 2022-11-09T18:10:46Z | 2022-11-09T18:06:57Z | null | ### Describe the bug
HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled; they are simple ZIP files containing PNG images.
Hopefully this is the right location to report this bug.
### Steps to reproduce the bug
Inspect my dataset repository here: https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images
### Expected behavior
They should not be reported as being pickled.
### Environment info
N/A | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5222/timeline | null | completed | null | null | false | [
"cc @McPatate maybe you know what's happening ?",
"Yes I think I know what is happening. We check in zips for pickles, and the UI must display the pickle jar when a scan has an associated list of imports, even when empty.\r\n~I'll fix ASAP !~",
"> I'll fix ASAP !\r\n\r\nActually I'd rather leave it like that for now, as it indicates that we checked for pickles and nothing dangerous appeared :)",
"Closing the issue with the typical \"feature not a bug\" "
] |
https://api.github.com/repos/huggingface/datasets/issues/6051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6051/comments | https://api.github.com/repos/huggingface/datasets/issues/6051/events | https://github.com/huggingface/datasets/issues/6051 | 1,811,549,650 | I_kwDODunzps5r-g3S | 6,051 | Skipping shard in the remote repo and resume upload | [] | closed | false | null | 2 | 2023-07-19T09:25:26Z | 2023-07-20T18:16:01Z | 2023-07-20T18:16:00Z | null | ### Describe the bug
For some reason, when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
enumerate(itertools.chain([first_shard], shards_iter)),
desc="Pushing dataset shards to the dataset hub",
total=num_shards,
disable=not logging.is_progress_bar_enabled(),
):
shard_path_in_repo = path_in_repo(index, shard)
# Upload a shard only if it doesn't already exist in the repository
if shard_path_in_repo not in data_files:
```
In particular, iterating the generator is slow during the call:
```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```
I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.
### Steps to reproduce the bug
1. Start the upload
```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```
2. Stop and restart the upload after hundreds of shards
### Expected behavior
Skip the uploaded shards faster.
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6051/timeline | null | completed | null | null | false | [
"Hi! `_select_contiguous` fetches a (zero-copy) slice of the dataset's Arrow table to build a shard, so I don't think this part is the problem. To me, the issue seems to be the step where we embed external image files' bytes (a lot of file reads). You can use `.map` with multiprocessing to perform this step before `push_to_hub` in a faster manner and cache it to disk:\r\n```python\r\nfrom datasets.table import embed_table_storage\r\n# load_dataset(...)\r\nformat = dataset.format\r\ndataset = dataset.with_format(\"arrow\")\r\ndataset = dataset.map(embed_table_storage, batched=True)\r\ndataset = dataset.with_format(**format)\r\n# push_to_hub(...)\r\n```\r\n\r\n(In Datasets 3.0, these external bytes will be written to an Arrow file when generating a dataset to avoid this \"embed\" step)",
"Hi, thanks, this solution saves some time.\r\nBut can't we avoid embedding all external image files bytes with each push, skipping the images that have already been pushed into the repo?\r\n\r\nEdit: Ok I missed the part of cache it manually on the disk the first time, this solves the problem. Thank you"
] |
https://api.github.com/repos/huggingface/datasets/issues/3128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3128/comments | https://api.github.com/repos/huggingface/datasets/issues/3128/events | https://github.com/huggingface/datasets/issues/3128 | 1,032,201,870 | I_kwDODunzps49hiaO | 3,128 | Support Audio feature for TAR archives in sequential access | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-10-21T08:23:01Z | 2021-11-17T17:42:07Z | 2021-11-17T17:42:07Z | null | Currently, Audio feature accesses each audio file by their file path.
However, streamed TAR archive files do not allow random access to their archived files.
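A minimal sketch of the kind of sequential read this would rely on (assumptions: `remote_stream` is a file-like object over the streamed archive, and the Audio feature accepts `{"path": ..., "bytes": ...}` dicts):
```python
import tarfile

with tarfile.open(fileobj=remote_stream, mode="r|*") as tar:  # "r|*" allows sequential-only access
    for member in tar:
        if member.isfile() and member.name.endswith(".wav"):
            audio_bytes = tar.extractfile(member).read()
            example = {"audio": {"path": member.name, "bytes": audio_bytes}}
            # yield `example` from the loading script; the Audio feature would decode the bytes
```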
Therefore, we should enhance the Audio feature to support sequential access to TAR-archived files. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3128/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5790/comments | https://api.github.com/repos/huggingface/datasets/issues/5790/events | https://github.com/huggingface/datasets/pull/5790 | 1,683,229,126 | PR_kwDODunzps5PG0mJ | 5,790 | Allow to run CI on push to ci-branch | [] | closed | false | null | 2 | 2023-04-25T13:57:26Z | 2023-04-26T13:43:08Z | 2023-04-26T13:35:47Z | null | This PR allows to run the CI on push to a branch named "ci-*", without needing to open a PR.
- This will allow running CI tests without opening a PR, e.g., for future `huggingface-hub` releases or future dependency releases (like `fsspec`, `pandas`, ...)
Note that to build the documentation, we already allow it on push to a branch named "doc-builder*".
See:
- #5788
CC: @Wauplin | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5790/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5790.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5790",
"merged_at": "2023-04-26T13:35:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5790.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5790"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007852 / 0.011353 (-0.003500) | 0.005804 / 0.011008 (-0.005204) | 0.098268 / 0.038508 (0.059760) | 0.036440 / 0.023109 (0.013331) | 0.299952 / 0.275898 (0.024054) | 0.335590 / 0.323480 (0.012111) | 0.006332 / 0.007986 (-0.001653) | 0.004218 / 0.004328 (-0.000110) | 0.074733 / 0.004250 (0.070483) | 0.055252 / 0.037052 (0.018200) | 0.300854 / 0.258489 (0.042365) | 0.353442 / 0.293841 (0.059601) | 0.036447 / 0.128546 (-0.092099) | 0.012638 / 0.075646 (-0.063009) | 0.336680 / 0.419271 (-0.082591) | 0.052436 / 0.043533 (0.008903) | 0.292606 / 0.255139 (0.037467) | 0.319676 / 0.283200 (0.036476) | 0.111137 / 0.141683 (-0.030546) | 1.449569 / 1.452155 (-0.002586) | 1.558110 / 1.492716 (0.065394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306043 / 0.018006 (0.288037) | 0.563174 / 0.000490 (0.562684) | 0.032227 / 0.000200 (0.032027) | 0.000491 / 0.000054 (0.000436) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029874 / 0.037411 (-0.007537) | 0.109330 / 0.014526 (0.094805) | 0.122579 / 0.176557 (-0.053978) | 0.181398 / 0.737135 (-0.555737) | 0.127124 / 0.296338 (-0.169215) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417950 / 0.215209 (0.202741) | 4.163883 / 2.077655 (2.086228) | 1.985209 / 1.504120 (0.481089) | 1.793660 / 1.541195 (0.252465) | 1.895193 / 1.468490 
(0.426703) | 0.694331 / 4.584777 (-3.890446) | 3.820170 / 3.745712 (0.074458) | 2.180556 / 5.269862 (-3.089305) | 1.490671 / 4.565676 (-3.075006) | 0.086132 / 0.424275 (-0.338143) | 0.012289 / 0.007607 (0.004682) | 0.511182 / 0.226044 (0.285137) | 5.117855 / 2.268929 (2.848927) | 2.403914 / 55.444624 (-53.040710) | 2.071107 / 6.876477 (-4.805369) | 2.184108 / 2.142072 (0.042036) | 0.835028 / 4.805227 (-3.970199) | 0.167707 / 6.500664 (-6.332957) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203921 / 1.841788 (-0.637867) | 15.214676 / 8.074308 (7.140368) | 14.971337 / 10.191392 (4.779945) | 0.170225 / 0.680424 (-0.510199) | 0.017924 / 0.534201 (-0.516277) | 0.428532 / 0.579283 (-0.150751) | 0.449157 / 0.434364 (0.014793) | 0.507723 / 0.540337 (-0.032614) | 0.615331 / 1.386936 (-0.771605) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008172 / 0.011353 (-0.003181) | 0.005405 / 0.011008 (-0.005603) | 0.074684 / 0.038508 (0.036176) | 0.039133 / 0.023109 (0.016024) | 0.342598 / 0.275898 (0.066700) | 0.377752 / 0.323480 (0.054272) | 0.006655 / 0.007986 (-0.001331) | 0.005788 / 0.004328 (0.001459) | 0.074014 / 0.004250 (0.069763) | 0.056225 / 0.037052 (0.019173) | 0.342330 / 0.258489 (0.083841) | 0.381052 / 0.293841 (0.087211) | 0.036574 / 0.128546 (-0.091973) | 0.012472 / 0.075646 (-0.063174) | 0.087574 / 0.419271 (-0.331698) | 0.050178 / 0.043533 (0.006646) | 0.351116 / 0.255139 (0.095977) | 0.363772 / 0.283200 (0.080572) | 0.118313 / 0.141683 (-0.023370) | 1.436691 / 1.452155 (-0.015463) | 1.551397 / 1.492716 (0.058680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265201 / 0.018006 (0.247195) | 0.561855 / 0.000490 (0.561366) | 0.000463 / 0.000200 (0.000263) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030540 / 0.037411 (-0.006871) | 0.118815 / 0.014526 (0.104289) | 0.127689 / 0.176557 (-0.048868) | 0.176211 / 0.737135 (-0.560924) | 0.133130 / 0.296338 (-0.163208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416318 / 0.215209 (0.201109) | 4.146806 / 2.077655 (2.069151) | 1.983437 / 1.504120 (0.479317) | 1.799733 / 1.541195 (0.258539) | 1.889026 / 1.468490 (0.420536) | 0.723330 / 4.584777 (-3.861447) | 3.817795 / 3.745712 (0.072083) | 2.158449 / 5.269862 (-3.111413) | 1.377348 / 4.565676 (-3.188328) | 0.088504 / 0.424275 (-0.335771) | 0.012560 / 0.007607 (0.004953) | 0.530382 / 0.226044 (0.304337) | 5.308529 / 2.268929 (3.039600) | 2.469655 / 55.444624 (-52.974970) | 2.136209 / 6.876477 (-4.740267) | 2.322997 / 2.142072 (0.180924) | 0.861396 / 4.805227 (-3.943831) | 0.172747 / 6.500664 (-6.327917) | 0.067617 / 0.075469 (-0.007852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263225 / 1.841788 (-0.578563) | 15.878025 / 8.074308 (7.803717) | 14.815627 / 10.191392 (4.624235) | 0.148722 / 0.680424 (-0.531702) | 0.018071 / 0.534201 (-0.516130) | 0.428389 / 0.579283 (-0.150894) | 0.428635 / 0.434364 (-0.005729) | 0.496953 / 0.540337 (-0.043385) | 0.592783 / 1.386936 (-0.794153) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2440/comments | https://api.github.com/repos/huggingface/datasets/issues/2440/events | https://github.com/huggingface/datasets/issues/2440 | 908,521,954 | MDU6SXNzdWU5MDg1MjE5NTQ= | 2,440 | Remove `extended` field from dataset tagger | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-06-01T17:18:42Z | 2021-06-09T09:06:31Z | 2021-06-09T09:06:30Z | null | ## Describe the bug
While working on #2435, I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.mark.parametrize("dataset_name", get_changed_datasets(repo_path))
def test_changed_dataset_card(dataset_name):
card_path = repo_path / "datasets" / dataset_name / "README.md"
assert card_path.exists()
error_messages = []
try:
ReadMe.from_readme(card_path)
except Exception as readme_error:
error_messages.append(f"The following issues have been found in the dataset cards:\nREADME:\n{readme_error}")
try:
DatasetMetadata.from_readme(card_path)
except Exception as metadata_error:
error_messages.append(
f"The following issues have been found in the dataset cards:\nYAML tags:\n{metadata_error}"
)
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E YAML tags:
E __init__() got an unexpected keyword argument 'extended'
tests/test_dataset_cards.py:70: ValueError
```
Consider either removing this tag from the tagger or including it as part of the validation step in the CI.
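As a hypothetical workaround (not part of the repo's tooling), the unsupported key could be dropped from a card's YAML block before validation; here `yaml_block` stands for the text between the leading `---` markers:
```python
import yaml

metadata = yaml.safe_load(yaml_block)
metadata.pop("extended", None)  # drop the key the validator does not recognize
yaml_block = yaml.safe_dump(metadata, sort_keys=False)
```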
cc @yjernite | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2440/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2440/timeline | null | completed | null | null | false | [
"The tagger also doesn't insert the value for the `size_categories` field automatically, so this should be fixed too",
"Thanks for reporting. Indeed the `extended` tag doesn't exist. Not sure why we had that in the tagger.\r\nThe repo of the tagger is here if someone wants to give this a try: https://github.com/huggingface/datasets-tagging\r\nOtherwise I can probably fix it next week",
"I've opened a PR on `datasets-tagging` to fix the issue 🚀 ",
"thanks ! this is fixed now"
] |
https://api.github.com/repos/huggingface/datasets/issues/2936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2936/comments | https://api.github.com/repos/huggingface/datasets/issues/2936/events | https://github.com/huggingface/datasets/pull/2936 | 999,521,647 | PR_kwDODunzps4r5knb | 2,936 | Check that array is not Float as nan != nan | [] | closed | false | null | 0 | 2021-09-17T16:16:41Z | 2021-09-21T09:39:05Z | 2021-09-21T09:39:04Z | null | The Exception wants to check for issues with StructArrays/ListArrays but catches FloatArrays with value nan as nan != nan.
Skip FloatArrays (i.e., `pass`), as we should not raise an exception for them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2936/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2936.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2936",
"merged_at": "2021-09-21T09:39:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2936.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2936"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5938/comments | https://api.github.com/repos/huggingface/datasets/issues/5938/events | https://github.com/huggingface/datasets/pull/5938 | 1,749,462,851 | PR_kwDODunzps5SmbkI | 5,938 | Make get_from_cache use custom temp filename that is locked | [] | closed | false | null | 2 | 2023-06-09T09:01:13Z | 2023-06-14T13:35:38Z | 2023-06-14T13:27:24Z | null | This PR ensures that the temporary filename created is the same as the one that is locked, while writing to the cache.
This PR stops using `tempfile` to generate the temporary filename.
Additionally, the behavior now is aligned for both `resume_download` `True` and `False`.
Refactor temp_file_manager so that it uses the filename that is locked:
- Use: `cache_path + ".incomplete"`, when the locked one is `cache_path + ".lock"`
Before, it was using `tempfile` inside `cache_dir`, which was not locked: although a name collision was very improbable (8 random characters), it was not impossible with a huge number of concurrent processes.
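A minimal sketch of the locking pattern described above (assumptions: `filelock` is available, `url` and `cache_path` are given, and `http_get` stands in for the download helper from `file_utils`):
```python
import os
from filelock import FileLock

lock_path = cache_path + ".lock"
temp_path = cache_path + ".incomplete"  # same stem as the lock, unlike a random tempfile name

with FileLock(lock_path):
    with open(temp_path, "wb") as temp_file:
        http_get(url, temp_file)        # download into the locked, predictable temp file
    os.replace(temp_path, cache_path)   # move the finished file into place
```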
Maybe related to "Stale file handle" issues caused by `tempfile`:
- [ ] https://huggingface.co/datasets/tapaco/discussions/4
- [ ] https://huggingface.co/datasets/xcsr/discussions/1
- [ ] https://huggingface.co/datasets/covost2/discussions/3
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1188, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 907, in get_module
dataset_readme_path = self.download_dataset_readme_file()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 896, in download_dataset_readme_file
return cached_path(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache
http_get(
File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__
result = self.file.__exit__(exc, value, tb)
OSError: [Errno 116] Stale file handle
```
- the stale file handle error can be raised when `tempfile` tries to close (when exiting its context manager) a file that has already been closed by another process
- note that `tempfile` filenames are randomly generated but not locked in our code
CC: @severo | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5938/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5938",
"merged_at": "2023-06-14T13:27:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5938"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007241 / 0.011353 (-0.004112) | 0.004574 / 0.011008 (-0.006434) | 0.120481 / 0.038508 (0.081973) | 0.040492 / 0.023109 (0.017383) | 0.391399 / 0.275898 (0.115501) | 0.422844 / 0.323480 (0.099365) | 0.004441 / 0.007986 (-0.003545) | 0.004544 / 0.004328 (0.000216) | 0.089482 / 0.004250 (0.085231) | 0.052939 / 0.037052 (0.015887) | 0.393649 / 0.258489 (0.135160) | 0.433852 / 0.293841 (0.140011) | 0.035882 / 0.128546 (-0.092664) | 0.010172 / 0.075646 (-0.065474) | 0.410331 / 0.419271 (-0.008940) | 0.061481 / 0.043533 (0.017948) | 0.405066 / 0.255139 (0.149927) | 0.417732 / 0.283200 (0.134532) | 0.121647 / 0.141683 (-0.020035) | 1.790624 / 1.452155 (0.338469) | 1.863398 / 1.492716 (0.370681) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250650 / 0.018006 (0.232644) | 0.489044 / 0.000490 (0.488554) | 0.010421 / 0.000200 (0.010222) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030340 / 0.037411 (-0.007071) | 0.128318 / 0.014526 (0.113792) | 0.140463 / 0.176557 (-0.036093) | 0.205762 / 0.737135 (-0.531373) | 0.147996 / 0.296338 (-0.148342) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.493158 / 0.215209 (0.277949) | 4.858346 / 2.077655 (2.780691) | 2.242942 / 1.504120 (0.738822) | 2.010092 / 1.541195 (0.468897) | 2.076765 / 1.468490 
(0.608275) | 0.636669 / 4.584777 (-3.948108) | 4.478027 / 3.745712 (0.732314) | 2.157843 / 5.269862 (-3.112019) | 1.305133 / 4.565676 (-3.260543) | 0.079220 / 0.424275 (-0.345055) | 0.013858 / 0.007607 (0.006251) | 0.604501 / 0.226044 (0.378457) | 5.950071 / 2.268929 (3.681143) | 2.738373 / 55.444624 (-52.706251) | 2.380275 / 6.876477 (-4.496201) | 2.517108 / 2.142072 (0.375035) | 0.772249 / 4.805227 (-4.032979) | 0.169874 / 6.500664 (-6.330790) | 0.078026 / 0.075469 (0.002557) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.450200 / 1.841788 (-0.391588) | 17.810965 / 8.074308 (9.736657) | 15.518998 / 10.191392 (5.327606) | 0.200469 / 0.680424 (-0.479954) | 0.020777 / 0.534201 (-0.513424) | 0.504556 / 0.579283 (-0.074727) | 0.518493 / 0.434364 (0.084129) | 0.615335 / 0.540337 (0.074998) | 0.754065 / 1.386936 (-0.632871) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007224 / 0.011353 (-0.004129) | 0.004663 / 0.011008 (-0.006345) | 0.092151 / 0.038508 (0.053643) | 0.038359 / 0.023109 (0.015250) | 0.486413 / 0.275898 (0.210515) | 0.521596 / 0.323480 (0.198116) | 0.004207 / 0.007986 (-0.003778) | 0.003745 / 0.004328 (-0.000583) | 0.089840 / 0.004250 (0.085589) | 0.050996 / 0.037052 (0.013943) | 0.498090 / 0.258489 (0.239601) | 0.533647 / 0.293841 (0.239806) | 0.035151 / 0.128546 (-0.093395) | 0.010293 / 0.075646 (-0.065354) | 0.099056 / 0.419271 (-0.320215) | 0.057365 / 0.043533 (0.013833) | 0.470652 / 0.255139 (0.215513) | 0.509801 / 0.283200 (0.226602) | 0.115650 / 0.141683 (-0.026033) | 1.810860 / 1.452155 (0.358705) | 1.896775 / 1.492716 (0.404059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261887 / 0.018006 (0.243880) | 0.489919 / 0.000490 (0.489430) | 0.006117 / 0.000200 (0.005917) | 0.000134 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035033 / 0.037411 (-0.002378) | 0.141093 / 0.014526 (0.126567) | 0.152613 / 0.176557 (-0.023943) | 0.218351 / 0.737135 (-0.518785) | 0.158366 / 0.296338 (-0.137972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.542219 / 0.215209 (0.327010) | 5.479358 / 2.077655 (3.401703) | 2.749586 / 1.504120 (1.245466) | 2.537686 / 1.541195 (0.996491) | 2.582351 / 1.468490 (1.113861) | 0.636750 / 4.584777 (-3.948027) | 4.537501 / 3.745712 (0.791789) | 2.141392 / 5.269862 (-3.128469) | 1.279711 / 4.565676 (-3.285965) | 0.079227 / 0.424275 (-0.345048) | 0.014141 / 0.007607 (0.006534) | 0.662070 / 0.226044 (0.436025) | 6.572144 / 2.268929 (4.303215) | 3.321349 / 55.444624 (-52.123275) | 2.928219 / 6.876477 (-3.948258) | 3.002732 / 2.142072 (0.860659) | 0.773808 / 4.805227 (-4.031419) | 0.166017 / 6.500664 (-6.334647) | 0.076424 / 0.075469 (0.000955) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584325 / 1.841788 (-0.257463) | 18.359247 / 8.074308 (10.284938) | 16.977875 / 10.191392 (6.786483) | 0.195381 / 0.680424 (-0.485043) | 0.021048 / 0.534201 (-0.513153) | 0.512237 / 0.579283 (-0.067047) | 0.511435 / 0.434364 (0.077071) | 0.592856 / 0.540337 (0.052518) | 0.711905 / 1.386936 (-0.675031) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3494/comments | https://api.github.com/repos/huggingface/datasets/issues/3494/events | https://github.com/huggingface/datasets/pull/3494 | 1,089,983,103 | PR_kwDODunzps4wV0vB | 3,494 | Clone full repo to detect new tags when mirroring datasets on the Hub | [] | closed | false | null | 2 | 2021-12-28T15:50:47Z | 2021-12-28T16:07:21Z | 2021-12-28T16:07:20Z | null | The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags.
By cloning the full repository, we can properly detect a new release and tag all the dataset repositories accordingly.
cc @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3494/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3494/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3494.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3494",
"merged_at": "2021-12-28T16:07:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3494.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3494"
} | true | [
"Good catch !!",
"The CI fail is unrelated to this PR and fixed on master, merging :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2164/comments | https://api.github.com/repos/huggingface/datasets/issues/2164/events | https://github.com/huggingface/datasets/pull/2164 | 849,739,759 | MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3 | 2,164 | Replace assertTrue(isinstance with assertIsInstance in tests | [] | closed | false | null | 0 | 2021-04-03T21:07:02Z | 2021-04-06T14:41:09Z | 2021-04-06T14:41:08Z | null | Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2164/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2164",
"merged_at": "2021-04-06T14:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2164"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/456/comments | https://api.github.com/repos/huggingface/datasets/issues/456/events | https://github.com/huggingface/datasets/pull/456 | 668,723,785 | MDExOlB1bGxSZXF1ZXN0NDU5MTc1MTY0 | 456 | add crd3(ACL 2020) dataset | [] | closed | false | null | 0 | 2020-07-30T13:28:35Z | 2020-08-03T11:28:52Z | 2020-08-03T11:28:52Z | null | This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/456/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/456/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/456.diff",
"html_url": "https://github.com/huggingface/datasets/pull/456",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/456.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/456"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4119/comments | https://api.github.com/repos/huggingface/datasets/issues/4119/events | https://github.com/huggingface/datasets/pull/4119 | 1,195,641,298 | PR_kwDODunzps41yXHF | 4,119 | Hotfix failing CI tests on Windows | [] | closed | false | null | 1 | 2022-04-07T07:38:46Z | 2022-04-07T09:47:24Z | 2022-04-07T07:57:13Z | null | This PR makes a hotfix for our CI Windows tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
Fix #4118
I guess this issue is related to this PR:
- huggingface/huggingface_hub#815 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4119/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4119",
"merged_at": "2022-04-07T07:57:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4119"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5159/comments | https://api.github.com/repos/huggingface/datasets/issues/5159/events | https://github.com/huggingface/datasets/pull/5159 | 1,422,172,080 | PR_kwDODunzps5BfBN9 | 5,159 | fsspec lock reset in multiprocessing | [] | closed | false | null | 1 | 2022-10-25T09:41:59Z | 2022-11-03T20:51:15Z | 2022-11-03T20:48:53Z | null | `fsspec` added a clean way of resetting its lock - instead of doing it manually | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5159/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5159.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5159",
"merged_at": "2022-11-03T20:48:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5159.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5159"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/473/comments | https://api.github.com/repos/huggingface/datasets/issues/473/events | https://github.com/huggingface/datasets/pull/473 | 672,007,247 | MDExOlB1bGxSZXF1ZXN0NDYyMTIwNzU4 | 473 | add DoQA dataset (ACL 2020) | [] | closed | false | null | 0 | 2020-08-03T11:26:52Z | 2020-09-10T17:19:11Z | 2020-09-03T11:44:15Z | null | add DoQA dataset (ACL 2020) http://ixa.eus/node/12931 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/473/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/473",
"merged_at": "2020-09-03T11:44:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/473"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6080/comments | https://api.github.com/repos/huggingface/datasets/issues/6080/events | https://github.com/huggingface/datasets/pull/6080 | 1,822,667,554 | PR_kwDODunzps5WdL4K | 6,080 | Remove README link to deprecated Colab notebook | [] | closed | false | null | 3 | 2023-07-26T15:27:49Z | 2023-07-26T16:24:43Z | 2023-07-26T16:14:34Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6080/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6080.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6080",
"merged_at": "2023-07-26T16:14:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6080.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6080"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006458 / 0.011353 (-0.004894) | 0.003895 / 0.011008 (-0.007114) | 0.084280 / 0.038508 (0.045772) | 0.071304 / 0.023109 (0.048195) | 0.313910 / 0.275898 (0.038012) | 0.344070 / 0.323480 (0.020590) | 0.005413 / 0.007986 (-0.002573) | 0.003308 / 0.004328 (-0.001021) | 0.064570 / 0.004250 (0.060320) | 0.056824 / 0.037052 (0.019771) | 0.321102 / 0.258489 (0.062613) | 0.355834 / 0.293841 (0.061993) | 0.031252 / 0.128546 (-0.097294) | 0.008427 / 0.075646 (-0.067219) | 0.287348 / 0.419271 (-0.131924) | 0.053261 / 0.043533 (0.009728) | 0.324892 / 0.255139 (0.069753) | 0.335847 / 0.283200 (0.052647) | 0.023453 / 0.141683 (-0.118230) | 1.485456 / 1.452155 (0.033301) | 1.531329 / 1.492716 (0.038612) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201924 / 0.018006 (0.183918) | 0.447188 / 0.000490 (0.446698) | 0.005543 / 0.000200 (0.005343) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027586 / 0.037411 (-0.009825) | 0.082412 / 0.014526 (0.067886) | 0.094851 / 0.176557 (-0.081706) | 0.151331 / 0.737135 (-0.585804) | 0.094475 / 0.296338 (-0.201863) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399004 / 0.215209 (0.183795) | 3.974652 / 2.077655 (1.896997) | 1.991909 / 1.504120 (0.487789) | 1.811684 / 1.541195 (0.270489) | 1.869774 / 1.468490 
(0.401283) | 0.487745 / 4.584777 (-4.097032) | 3.558945 / 3.745712 (-0.186768) | 5.530468 / 5.269862 (0.260606) | 3.293147 / 4.565676 (-1.272529) | 0.057531 / 0.424275 (-0.366744) | 0.007212 / 0.007607 (-0.000395) | 0.470325 / 0.226044 (0.244281) | 4.701652 / 2.268929 (2.432723) | 2.453020 / 55.444624 (-52.991605) | 2.110152 / 6.876477 (-4.766325) | 2.314669 / 2.142072 (0.172597) | 0.615039 / 4.805227 (-4.190189) | 0.133229 / 6.500664 (-6.367435) | 0.060821 / 0.075469 (-0.014648) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296708 / 1.841788 (-0.545079) | 18.717251 / 8.074308 (10.642943) | 14.325305 / 10.191392 (4.133913) | 0.147680 / 0.680424 (-0.532744) | 0.018312 / 0.534201 (-0.515889) | 0.392766 / 0.579283 (-0.186517) | 0.403319 / 0.434364 (-0.031045) | 0.453696 / 0.540337 (-0.086641) | 0.622564 / 1.386936 (-0.764372) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006483 / 0.011353 (-0.004870) | 0.004018 / 0.011008 (-0.006991) | 0.064436 / 0.038508 (0.025928) | 0.072365 / 0.023109 (0.049256) | 0.387532 / 0.275898 (0.111634) | 0.418175 / 0.323480 (0.094695) | 0.005453 / 0.007986 (-0.002533) | 0.003368 / 0.004328 (-0.000961) | 0.064896 / 0.004250 (0.060645) | 0.057018 / 0.037052 (0.019966) | 0.406596 / 0.258489 (0.148107) | 0.431194 / 0.293841 (0.137353) | 0.031788 / 0.128546 (-0.096759) | 0.008532 / 0.075646 (-0.067114) | 0.070605 / 0.419271 (-0.348666) | 0.053317 / 0.043533 (0.009785) | 0.391930 / 0.255139 (0.136791) | 0.406071 / 0.283200 (0.122872) | 0.028652 / 0.141683 (-0.113030) | 1.487677 / 1.452155 (0.035522) | 1.546071 / 1.492716 (0.053355) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220063 / 0.018006 (0.202056) | 0.441111 / 0.000490 (0.440621) | 0.006066 / 0.000200 (0.005867) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035179 / 0.037411 (-0.002232) | 0.096745 / 0.014526 (0.082219) | 0.108171 / 0.176557 (-0.068386) | 0.164590 / 0.737135 (-0.572545) | 0.109425 / 0.296338 (-0.186913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408101 / 0.215209 (0.192892) | 4.062961 / 2.077655 (1.985306) | 2.101849 / 1.504120 (0.597730) | 1.935919 / 1.541195 (0.394724) | 1.993749 / 1.468490 (0.525259) | 0.487788 / 4.584777 (-4.096989) | 3.533972 / 3.745712 (-0.211740) | 3.218448 / 5.269862 (-2.051414) | 2.002322 / 4.565676 (-2.563355) | 0.057371 / 0.424275 (-0.366904) | 0.007704 / 0.007607 (0.000097) | 0.491695 / 0.226044 (0.265650) | 4.905009 / 2.268929 (2.636080) | 2.597879 / 55.444624 (-52.846745) | 2.252086 / 6.876477 (-4.624391) | 2.434439 / 2.142072 (0.292367) | 0.583071 / 4.805227 (-4.222156) | 0.133765 / 6.500664 (-6.366899) | 0.061276 / 0.075469 (-0.014193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.403111 / 1.841788 (-0.438676) | 19.218886 / 8.074308 (11.144578) | 13.981775 / 10.191392 (3.790383) | 0.167784 / 0.680424 (-0.512640) | 0.018401 / 0.534201 (-0.515800) | 0.392038 / 0.579283 (-0.187245) | 0.414776 / 0.434364 (-0.019587) | 0.476221 / 0.540337 (-0.064117) | 0.632724 / 1.386936 (-0.754212) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007595 / 0.011353 (-0.003758) | 0.004540 / 0.011008 (-0.006468) | 0.099350 / 0.038508 (0.060842) | 0.087062 / 0.023109 (0.063953) | 0.415980 / 0.275898 (0.140082) | 0.466390 / 0.323480 (0.142910) | 0.005958 / 0.007986 (-0.002027) | 0.003671 / 0.004328 (-0.000657) | 0.075714 / 0.004250 (0.071463) | 0.066062 / 0.037052 (0.029010) | 0.426527 / 0.258489 (0.168038) | 0.473282 / 0.293841 (0.179441) | 0.035669 / 0.128546 (-0.092878) | 0.009729 / 0.075646 (-0.065918) | 0.344035 / 0.419271 (-0.075237) | 0.061153 / 0.043533 (0.017620) | 0.428607 / 0.255139 (0.173468) | 0.445951 / 0.283200 (0.162752) | 0.026373 / 0.141683 (-0.115310) | 1.788725 / 1.452155 (0.336570) | 1.871055 / 1.492716 (0.378339) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230606 / 0.018006 (0.212600) | 0.489835 / 0.000490 (0.489345) | 0.005669 / 0.000200 (0.005469) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032197 / 0.037411 (-0.005214) | 0.099571 / 0.014526 (0.085045) | 0.112686 / 0.176557 (-0.063871) | 0.179478 / 0.737135 (-0.557658) | 0.112670 / 0.296338 (-0.183668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449606 / 0.215209 (0.234397) | 4.503356 / 2.077655 (2.425701) | 2.190480 / 1.504120 (0.686361) | 1.986054 / 1.541195 (0.444860) | 2.071594 / 1.468490 
(0.603104) | 0.566301 / 4.584777 (-4.018475) | 4.088460 / 3.745712 (0.342748) | 4.840100 / 5.269862 (-0.429761) | 2.857697 / 4.565676 (-1.707980) | 0.066718 / 0.424275 (-0.357557) | 0.008642 / 0.007607 (0.001034) | 0.539785 / 0.226044 (0.313740) | 5.383252 / 2.268929 (3.114323) | 2.878177 / 55.444624 (-52.566447) | 2.374577 / 6.876477 (-4.501899) | 2.590500 / 2.142072 (0.448428) | 0.675196 / 4.805227 (-4.130031) | 0.153544 / 6.500664 (-6.347120) | 0.070958 / 0.075469 (-0.004511) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490403 / 1.841788 (-0.351385) | 22.085740 / 8.074308 (14.011432) | 16.588093 / 10.191392 (6.396701) | 0.188598 / 0.680424 (-0.491826) | 0.021567 / 0.534201 (-0.512634) | 0.472594 / 0.579283 (-0.106689) | 0.472903 / 0.434364 (0.038539) | 0.545305 / 0.540337 (0.004968) | 0.736399 / 1.386936 (-0.650537) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007635 / 0.011353 (-0.003718) | 0.004731 / 0.011008 (-0.006277) | 0.076482 / 0.038508 (0.037974) | 0.083666 / 0.023109 (0.060557) | 0.469596 / 0.275898 (0.193698) | 0.493068 / 0.323480 (0.169588) | 0.006014 / 0.007986 (-0.001971) | 0.003902 / 0.004328 (-0.000426) | 0.077142 / 0.004250 (0.072891) | 0.064355 / 0.037052 (0.027303) | 0.468859 / 0.258489 (0.210370) | 0.504002 / 0.293841 (0.210161) | 0.037606 / 0.128546 (-0.090940) | 0.010141 / 0.075646 (-0.065505) | 0.083790 / 0.419271 (-0.335482) | 0.060923 / 0.043533 (0.017390) | 0.464752 / 0.255139 (0.209613) | 0.500464 / 0.283200 (0.217264) | 0.031183 / 0.141683 (-0.110499) | 1.779294 / 1.452155 (0.327139) | 1.870848 / 1.492716 (0.378131) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246567 / 0.018006 (0.228560) | 0.477182 / 0.000490 (0.476693) | 0.000426 / 0.000200 (0.000226) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035377 / 0.037411 (-0.002034) | 0.106042 / 0.014526 (0.091516) | 0.119237 / 0.176557 (-0.057320) | 0.182145 / 0.737135 (-0.554991) | 0.119537 / 0.296338 (-0.176801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491352 / 0.215209 (0.276143) | 4.824220 / 2.077655 (2.746565) | 2.652039 / 1.504120 (1.147919) | 2.535310 / 1.541195 (0.994116) | 2.620009 / 1.468490 (1.151519) | 0.567865 / 4.584777 (-4.016912) | 4.158795 / 3.745712 (0.413082) | 6.042582 / 5.269862 (0.772721) | 3.957193 / 4.565676 (-0.608484) | 0.066647 / 0.424275 (-0.357628) | 0.008893 / 0.007607 (0.001285) | 0.570137 / 0.226044 (0.344093) | 5.687126 / 2.268929 (3.418198) | 3.137605 / 55.444624 (-52.307019) | 2.655979 / 6.876477 (-4.220498) | 2.893338 / 2.142072 (0.751265) | 0.698388 / 4.805227 (-4.106840) | 0.154897 / 6.500664 (-6.345767) | 0.071208 / 0.075469 (-0.004261) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.619346 / 1.841788 (-0.222441) | 22.782510 / 8.074308 (14.708202) | 16.317395 / 10.191392 (6.126003) | 0.197630 / 0.680424 (-0.482794) | 0.021795 / 0.534201 (-0.512406) | 0.466982 / 0.579283 (-0.112302) | 0.468609 / 0.434364 (0.034245) | 0.574380 / 0.540337 (0.034043) | 0.759827 / 1.386936 (-0.627109) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1424/comments | https://api.github.com/repos/huggingface/datasets/issues/1424/events | https://github.com/huggingface/datasets/pull/1424 | 760,724,914 | MDExOlB1bGxSZXF1ZXN0NTM1NTA4MjY5 | 1,424 | Add yoruba wordsim353 | [] | closed | false | null | 0 | 2020-12-09T22:37:42Z | 2020-12-09T22:39:45Z | 2020-12-09T22:39:45Z | null | Added WordSim-353 evaluation dataset for Yoruba | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1424/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1424/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1424.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1424",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1424.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1424"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2228/comments | https://api.github.com/repos/huggingface/datasets/issues/2228/events | https://github.com/huggingface/datasets/pull/2228 | 859,795,563 | MDExOlB1bGxSZXF1ZXN0NjE2ODE2MTQz | 2,228 | [WIP] Add ArrayXD support for fixed size list. | [] | open | false | null | 1 | 2021-04-16T13:04:08Z | 2022-07-06T15:19:48Z | null | null | Add support for fixed-size lists for ArrayXD when the shape is known. See https://github.com/huggingface/datasets/issues/2146
Since offsets are no longer stored, the file size is now roughly equal to the actual data size. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2228/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2228.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2228",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2228.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2228"
} | true | [
"Awesome thanks ! To fix the CI you just need to merge master into your branch.\r\nThe error is unrelated to your PR"
] |
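For context on what the user-facing API of a fixed-shape ArrayXD feature looks like, here is a minimal sketch (not taken from the PR above; the column name and values are invented) that declares an `Array2D` feature with a fully known shape:

```python
from datasets import Array2D, Dataset, Features

# A feature whose shape is fully known up front, which is the case this PR targets.
features = Features({"matrix": Array2D(shape=(2, 3), dtype="float32")})

data = {"matrix": [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]]}  # one example of shape (2, 3)
ds = Dataset.from_dict(data, features=features)
print(ds.features)
```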
https://api.github.com/repos/huggingface/datasets/issues/4448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4448/comments | https://api.github.com/repos/huggingface/datasets/issues/4448/events | https://github.com/huggingface/datasets/issues/4448 | 1,260,966,129 | I_kwDODunzps5LKNDx | 4,448 | New Preprocessing Feature - Deduplication [Request] | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2022-06-05T05:32:56Z | 2023-03-08T17:38:37Z | null | null | **Is your feature request related to a problem? Please describe.**
Many large datasets are full of duplicates, and it has been shown that deduplicating datasets can lead to better performance during training and more truthful evaluation at test time.
A feature that allows one to easily deduplicate a dataset would be cool!
**Describe the solution you'd like**
We could define a key function and keep only the first/last data point for each value returned by this function.
**Describe alternatives you've considered**
The obvious alternative is to repeat the same boilerplate every time someone wants to deduplicate a dataset.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4448/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4448/timeline | null | null | null | null | false | [
"Hi! The [datasets_sql](https://github.com/mariosasko/datasets_sql) package lets you easily find distinct rows in a dataset (an example with `SELECT DISTINCT` is in the readme). Deduplication is (still) not part of the official API because it's hard to implement for datasets bigger than RAM while only using the native PyArrow ops.\r\n\r\n(Btw, this is a duplicate of https://github.com/huggingface/datasets/issues/2514)",
"Here is an example using the [datasets_sql](https://github.com/mariosasko/datasets_sql) mentioned \r\n\r\n```python \r\nfrom datasets_sql import query\r\n\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\n\r\n# If you dont have an id column just add one by enumerating\r\ndataset=dataset.map(lambda x,i: {\"id\":i}, with_indices=True)\r\n\r\nid_column='id'\r\nunique_column='text'\r\n\r\n# always selects min id\r\nunique_dataset = query(f\"SELECT dataset.* FROM dataset JOIN (SELECT MIN({id_column}) as unique_id FROM dataset group by {unique_column}) ON unique_id=dataset.{id_column}\")\r\n```\r\nNot ideal for large datasets but good enough for basic cases.\r\nSure would be nice to have in the library 🤗 "
] |
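Until deduplication is part of the official API, the boilerplate mentioned above can be sketched as follows. This is only an illustrative, single-process approach that assumes the deduplication key ("text" here) fits in memory; it is not an official `datasets` feature.

```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")

seen = set()

def keep_first_occurrence(example):
    # Keep only the first example for each distinct value of the dedup key.
    key = example["text"]
    if key in seen:
        return False
    seen.add(key)
    return True

# Relies on single-process filtering so that `seen` is shared across all examples.
deduplicated = dataset.filter(keep_first_occurrence)
```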
https://api.github.com/repos/huggingface/datasets/issues/2530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2530/comments | https://api.github.com/repos/huggingface/datasets/issues/2530/events | https://github.com/huggingface/datasets/pull/2530 | 927,013,773 | MDExOlB1bGxSZXF1ZXN0Njc1MjMyNDk0 | 2,530 | Fixed label parsing in the ProductReviews dataset | [] | closed | false | null | 4 | 2021-06-22T09:12:45Z | 2021-06-22T12:55:20Z | 2021-06-22T12:52:40Z | null | Fixed issue with parsing dataset labels. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2530/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2530/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2530",
"merged_at": "2021-06-22T12:52:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2530"
} | true | [
"@lhoestq, can you please review this PR?\r\nWhat exactly is the problem in the test case? Should it matter?",
"Hi ! Thanks for fixing this :)\r\n\r\nThe CI fails for two reasons:\r\n- the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:\r\n```yaml\r\npretty_name: Turkish Product Reviews\r\n```\r\n- The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file",
"> Hi ! Thanks for fixing this :)\r\n> \r\n> The CI fails for two reasons:\r\n> \r\n> * the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:\r\n> \r\n> \r\n> ```yaml\r\n> pretty_name: Turkish Product Reviews\r\n> ```\r\n> \r\n> * The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file\r\n\r\nMany thanks for the quick feedback.\r\nI made the relevant fixes but still got the error :(",
"> Thanks !\r\n> The CI was failing because of the dataset card that was missing some sections. I fixed that.\r\n> \r\n> It's all good now\r\n\r\nSuper. Thanks for the support."
] |
https://api.github.com/repos/huggingface/datasets/issues/447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/447/comments | https://api.github.com/repos/huggingface/datasets/issues/447/events | https://github.com/huggingface/datasets/pull/447 | 666,842,115 | MDExOlB1bGxSZXF1ZXN0NDU3NjE2NDA0 | 447 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | [] | closed | false | null | 0 | 2020-07-28T07:41:10Z | 2020-07-28T12:58:01Z | 2020-07-28T12:52:05Z | null | Fixed the path to `DEFAULT_TOKENIZER`
#445 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/447/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/447",
"merged_at": "2020-07-28T12:52:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/447"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2600/comments | https://api.github.com/repos/huggingface/datasets/issues/2600/events | https://github.com/huggingface/datasets/issues/2600 | 938,086,745 | MDU6SXNzdWU5MzgwODY3NDU= | 2,600 | Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-07-06T16:53:25Z | 2021-07-07T12:50:31Z | 2021-07-07T12:50:31Z | null | ## Describe the bug
If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes.
## Steps to reproduce the bug
```python
from datasets import Dataset
data = Dataset.from_dict({'id': [0,1]})
data.filter(lambda x: False, num_proc=2)
```
## Expected results
An empty table should be returned without crashing.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/user/venv/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2143, in filter
return self.map(
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1738, in map
result = concatenate_datasets(transformed_shards)
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3267, in concatenate_datasets
table = concat_tables(tables_to_concat, axis=axis)
File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 853, in concat_tables
return ConcatenationTable.from_tables(tables, axis=axis)
File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 713, in from_tables
blocks = to_blocks(tables[0])
IndexError: list index out of range
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.12.11-300.fc34.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2600/timeline | null | completed | null | null | false | [] |
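Until the fix shipped, one defensive pattern was to fall back to single-process filtering when every sample may be discarded. This is my own workaround sketch, not an official recommendation:

```python
from datasets import Dataset

data = Dataset.from_dict({"id": [0, 1]})

try:
    filtered = data.filter(lambda x: False, num_proc=2)
except IndexError:
    # Single-process filtering handles the "everything discarded" case without crashing.
    filtered = data.filter(lambda x: False)

print(len(filtered))  # 0
```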
https://api.github.com/repos/huggingface/datasets/issues/3674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3674/comments | https://api.github.com/repos/huggingface/datasets/issues/3674/events | https://github.com/huggingface/datasets/pull/3674 | 1,123,027,874 | PR_kwDODunzps4yBe17 | 3,674 | Add FrugalScore metric | [] | closed | false | null | 5 | 2022-02-03T12:28:52Z | 2022-02-21T15:58:44Z | 2022-02-21T15:58:44Z | null | This pull request add FrugalScore metric for NLG systems evaluation.
FrugalScore is a reference-based metric for NLG models evaluation. It is based on a distillation approach that allows to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Github: https://github.com/moussaKam/FrugalScore
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3674/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3674.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3674",
"merged_at": "2022-02-21T15:58:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3674.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3674"
} | true | [
"@lhoestq \r\n\r\nThe model used by default (`moussaKam/frugalscore_tiny_bert-base_bert-score`) is a tiny model.\r\n\r\nI still want to make one modification before merging.\r\nI would like to load the model checkpoint once. Do you think it's a good idea if I load it in `_download_and_prepare`? In this case should the model name be the `self.config_name` or another variable say `self.model_name` ? ",
"OK, I added a commit that loads the checkpoint in `_download_and_prepare`. Please let me know if it looks good. ",
"@lhoestq is everything OK to merge? ",
"I triggered the CI and it's failing, can you merge the `master` branch into yours ? It should fix the issues.\r\n\r\nAlso the doctest apparently raises an error because it outputs `{'scores': [0.6307542, 0.6449357]}` instead of `{'scores': [0.631, 0.645]}` - feel free to edit the code example in the docstring to round the scores, that should fix it",
"@lhoestq hope it's OK now"
] |
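Based on the PR description and the score format shown in the comments, loading and using the metric presumably looks roughly like the sketch below; the example predictions and references are invented, and the default checkpoint is the tiny one discussed above.

```python
from datasets import load_metric

frugalscore = load_metric("frugalscore")
results = frugalscore.compute(
    predictions=["hello there", "general kenobi"],
    references=["hello there", "general kenobi"],
)
print(results["scores"])  # a list of per-example scores, as in the comment thread
```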
https://api.github.com/repos/huggingface/datasets/issues/1674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1674/comments | https://api.github.com/repos/huggingface/datasets/issues/1674/events | https://github.com/huggingface/datasets/issues/1674 | 777,321,840 | MDU6SXNzdWU3NzczMjE4NDA= | 1,674 | dutch_social can't be loaded | [] | closed | false | null | 8 | 2021-01-01T17:37:08Z | 2022-10-05T13:03:26Z | 2022-10-05T13:03:26Z | null | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koenvandenberge$ python
Python 3.7.4 (default, Aug 13 2019, 15:17:50)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
dataset = load_dataset(
'dutch_social')
>>> dataset = load_dataset(
... 'dutch_social')
Traceback (most recent call last):
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dutch_social/dutch_social.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dutch_social/dutch_social.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module
combined_path, github_file_path, file_path
FileNotFoundError: Couldn't find file locally at dutch_social/dutch_social.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dutch_social/dutch_social.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dutch_social/dutch_social.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1674/timeline | null | completed | null | null | false | [
"exactly the same issue in some other datasets.\r\nDid you find any solution??\r\n",
"Hi @koenvandenberge and @alighofrani95!\r\nThe datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the library.\r\nMeanwhile, you can still load the datasets using one of the techniques described in this issue: #1641 \r\nLet me know if this helps!",
"Maybe we should do a small release on Monday in the meantime @lhoestq ?",
"Yes sure !",
"I just did the release :)\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `dutch_social` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"dutch_social\")\r\n```",
"@lhoestq could you also shed light on the Hindi Wikipedia Dataset for issue number #1673. Will this also be available in the new release that you committed recently?",
"The issue is different for this one, let me give more details in the issue",
"Okay. Could you comment on the #1673 thread? Actually @thomwolf had commented that if i use datasets library from source, it would allow me to download the Hindi Wikipedia Dataset but even the version 1.1.3 gave me the same issue. The details are there in the issue #1673 thread."
] |
https://api.github.com/repos/huggingface/datasets/issues/4572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4572/comments | https://api.github.com/repos/huggingface/datasets/issues/4572/events | https://github.com/huggingface/datasets/issues/4572 | 1,285,022,499 | I_kwDODunzps5Ml-Mj | 4,572 | Dataset Viewer issue for mlsum | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-06-26T20:24:17Z | 2022-07-21T12:40:01Z | 2022-07-21T12:40:01Z | null | ### Link
https://huggingface.co/datasets/mlsum/viewer/de/train
### Description
There seems to be a problem with the download / streaming of this dataset:
```
Server error
Status code: 400
Exception: BadZipFile
Message: File is not a zip file
```
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4572/timeline | null | completed | null | null | false | [
"Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..."
] |
https://api.github.com/repos/huggingface/datasets/issues/2698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2698/comments | https://api.github.com/repos/huggingface/datasets/issues/2698/events | https://github.com/huggingface/datasets/pull/2698 | 950,159,867 | MDExOlB1bGxSZXF1ZXN0Njk0NzUxMzMw | 2,698 | Ignore empty batch when writing | [] | closed | false | null | 0 | 2021-07-21T22:35:30Z | 2021-07-26T14:56:03Z | 2021-07-26T13:25:26Z | null | This prevents a schema update with unknown column types, as reported in #2644.
This is my first attempt at fixing the issue. I tested the following:
- First batch returned by a batched map operation is empty.
- An intermediate batch is empty.
- `python -m unittest tests.test_arrow_writer` passes.
However, `arrow_writer` looks like a pretty generic interface, so I'm not sure whether there are other uses I may have overlooked. Let me know if that's the case, or if a better approach would be preferable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2698/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2698.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2698",
"merged_at": "2021-07-26T13:25:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2698.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2698"
} | true | [] |
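To illustrate the situation this PR guards against, here is my own sketch (not taken from the PR) of a batched map whose first batch comes back empty, which previously triggered a schema update with unknown column types:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "a long enough sentence"]})

def drop_short(batch):
    kept = [t for t in batch["text"] if len(t) > 5]
    return {"text": kept}  # may be an empty list for some batches

# With batch_size=2 the first batch yields no rows at all.
ds = ds.map(drop_short, batched=True, batch_size=2)
print(ds["text"])
```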
https://api.github.com/repos/huggingface/datasets/issues/1828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1828/comments | https://api.github.com/repos/huggingface/datasets/issues/1828/events | https://github.com/huggingface/datasets/pull/1828 | 802,449,234 | MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2 | 1,828 | Add CelebA Dataset | [] | closed | false | null | 9 | 2021-02-05T20:20:55Z | 2021-02-18T14:17:07Z | 2021-02-18T14:17:07Z | null | Trying to add CelebA Dataset.
Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`.
Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]')` still loads all the examples (doesn't stop at 10). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1828/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1828",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1828"
} | true | [
"Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you be up for starting with only object classification or object detection datasets instead? (Your CIFAR-100 contribution will be super useful for example!)",
"Hi @yjernite, You're welcome. I am enjoying adding new datasets :)\r\nBy \"pretty problematic\", are you referring to the ethical issues? I used TFDS's [CelebA](https://github.com/tensorflow/datasets/blob/5ef7861470896acb6f74dacba85036001e4f1b8c/tensorflow_datasets/image/celeba.py#L91) as a reference. Here they mention in a \"Note\" that CelebA \"may contain potential bias\". Can we not do the same? I skipped the note for now, and we can add it. However, if you feel this isn't the right time, then I won't pursue this further. \r\n\r\nBut, can this issue be handled at a later stage? Does this also apply for my Hateful Memes Issue #1810?\r\n\r\nAlso, how can I \r\n1. load a part of the dataset? since `load_dataset(<>,split='train[10:20]')` still loads all the examples.\r\n2. make `datasets_infos.json` for huge datasets which have a single configuration?\r\n\r\nI will ofcourse be looking for other datasets to add regardless. \r\n",
"It's definitely a thorny question. The short answer is: Hateful Memes and hate speech detection datasets are different since their use case is specifically to train systems to identify and hopefully remove hateful content, whereas the purpose of a dataset that has an Attractiveness score as output is implicitly to train more models to rate \"Attractiveness\". \r\n\r\nAs far as warning about the \"potential biases\", I do not think it is quite enough, especially because it is hard to guarantee that every potential user will read the documentation (it is also an insufficient warning.)\r\n\r\nNote that we do have higher standards for the dataset cards of hate speech and hateful memes datasets, so if you do choose to add that one yourself we will ask that you summarize the relevant literature in the Social Impact section.\r\n\r\nIf you really need to add this dataset for your own research for the explicit purpose of studying these biases, you can add it as a community provided dataset following https://huggingface.co/docs/datasets/master/share_dataset.html#sharing-a-community-provided-dataset but I'd recommend just skipping it for now.",
"So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. You can also use the filesystem integration if local storage is an issue:\r\nhttps://huggingface.co/docs/datasets/master/filesystems.html\r\n",
"I don't think we have a great solution for `dataset_infos.json` with a single very large config when storage space is an issue, but it should be solved by the same upcoming feature mentioned above",
"Okay, then I won't pursue this one further. I'll keep this branch on my repository just in case the possibility of adding this dataset comes up in the future.\r\n\r\n> So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. You can also use the filesystem integration if local storage is an issue:\r\n> https://huggingface.co/docs/datasets/master/filesystems.html\r\n\r\nAfter downloading the whole dataset (around 1.4GB), it still loads all the examples despite using `split='train[:10%]'` or `split='train[10:20]'`. \r\n\r\nEDIT: I think this would happen only when the examples are generated for the first time and saved to the cache. Streaming parts of the data from a remote host sounds amazing! But, would that also allow for streaming examples of the data from the local cache? (without saving all the examples the first time).\r\n\r\nWhat I used:\r\n`d = load_dataset('./datasets/celeb_a',split='train[:10]')`\r\nOutput:\r\n`570 examples [01:33, 6.25 examples/s]` and it keeps going. \r\n\r\nEDIT 2: After a few thousand images, I get the following error:\r\n```python\r\nOSError: [Errno 24] Too many open files: '~/.cache/huggingface/datasets/celeb_a/default/1.1.0/01f9dca66039ab7c40b91b09af47a5fa8c3e49dc8d55df50da55b14116229207.incomplete'\r\n```\r\nI understand this is because of the way I load the images :\r\n```python\r\nImage.open(<path>)\r\n```\r\nWhat could be better alternative? I am only asking in case I face the same issues in the future.",
"Just some addition about loading only a subset of the data:\r\nCurrently if even you specify `split='train[:10]'`, it downloads and generate the full dataset, so that you can pick another part afterward if you want to. We may change that in the future and use streaming.\r\n\r\nAnd about your open files issue, you can try to close each image file after reading its content.",
"Hi @lhoestq,\r\nThanks for your response.\r\n\r\nI used `gc.collect()` inside the loop and that worked for me. I think since we are using a generator, and if I have something like `train[100000:100002]`, we will need to generate the first 1000001 examples and store. Ofcourse, this feature isn't a necessity right now, I suppose.",
"Closing this PR."
] |
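Regarding the "Too many open files" error discussed in the comments above, a common general remedy (a standard PIL pattern, not something specific to this PR) is to make sure each image file handle is closed once the pixel data has been read:

```python
import numpy as np
from PIL import Image

def load_image(path):
    # The context manager closes the underlying file once the data is materialized.
    with Image.open(path) as img:
        return np.array(img)
```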
https://api.github.com/repos/huggingface/datasets/issues/2886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2886/comments | https://api.github.com/repos/huggingface/datasets/issues/2886/events | https://github.com/huggingface/datasets/issues/2886 | 992,534,632 | MDU6SXNzdWU5OTI1MzQ2MzI= | 2,886 | Hj | [] | closed | false | null | 0 | 2021-09-09T18:58:52Z | 2021-09-10T11:46:29Z | 2021-09-10T11:46:29Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2886/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2886/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1223/comments | https://api.github.com/repos/huggingface/datasets/issues/1223/events | https://github.com/huggingface/datasets/pull/1223 | 758,022,208 | MDExOlB1bGxSZXF1ZXN0NTMzMjY2MDc4 | 1,223 | 🇸🇪 Added Swedish Reviews dataset for sentiment classification in Sw… | [] | closed | false | null | 0 | 2020-12-06T21:02:54Z | 2020-12-08T10:54:56Z | 2020-12-08T10:54:56Z | null | perhaps: @lhoestq 🤗 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1223/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1223.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1223",
"merged_at": "2020-12-08T10:54:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1223.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1223"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3505/comments | https://api.github.com/repos/huggingface/datasets/issues/3505/events | https://github.com/huggingface/datasets/issues/3505 | 1,091,150,820 | I_kwDODunzps5BCaPk | 3,505 | cast_column function not working with map function in streaming mode for Audio features | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-12-30T14:52:01Z | 2022-01-18T19:54:07Z | 2022-01-18T19:54:07Z | null | ## Describe the bug
I am trying to use the Audio class to load audio features from a custom dataset. I am able to cast the 'audio' feature to the 'Audio' type with the cast_column function. However, when using the map function, I do not get the casted 'Audio' feature; I only get the path of the audio file.
After the load_dataset call, the 'audio' feature has string type. After using cast_column, the 'audio' feature is converted to the 'Audio' type. But inside the map function I am not able to get the Audio type for the audio feature and only receive string data containing the file path, so I am not able to use the processor in the encode function.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor
def encode(batch, processor):
    print("Audio: ", batch['audio'])
    batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
    return batch

def print_ds(ds):
    iterator = iter(ds)
    for d in iterator:
        print("Data: ", d)
        break

processor = Wav2Vec2Processor.from_pretrained(pretrained_model_path)
dataset = load_dataset("custom_dataset.py", "train", data_files={'train': 'train_path.txt'},
                       data_dir="data", streaming=True, split="train")
print("Features: ",dataset.features)
print_ds(dataset)
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print("Features: ",dataset.features)
print_ds(dataset)
dataset = dataset.map(lambda x: encode(x,processor))
print("Features: ",dataset.features)
print_ds(dataset)
```
## Expected results
The map function should yield Audio-type features that can be used with the processor function; instead, I am getting an error in the processor call because only the file path is passed.
## Actual results
# after load_dataset call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Value(dtype='string', id=None)}
Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': 'data/0116_003.wav'}
# after cast_column call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None)}
Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': {'path': 'data/0116_003.wav', 'array': array([ 1.2662281e-06, 1.0264218e-06, -1.3615092e-06, ...,
1.3017889e-02, 1.0085563e-02, 4.8155054e-03], dtype=float32), 'sampling_rate': 16000}}
# after map call
Features: None
Audio: data/0116_003.wav
Traceback (most recent call last):
File "demo2.py", line 36, in <module>
print_ds(dataset)
File "demo2.py", line 11, in print_ds
for d in iterator:
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "demo2.py", line 32, in <lambda>
dataset = dataset.map(lambda x: batch_encode(x,processor))
File "demo2.py", line 6, in batch_encode
batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
TypeError: string indices must be integers
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-4.14.243 with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3505/timeline | null | completed | null | null | false | [
"Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)."
] |
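Until decoding works inside `map` for streamed datasets, one possible workaround is to decode the audio manually inside the transform. This is my own sketch; it assumes `soundfile` can read the files and that the column may arrive either already decoded or as a plain path:

```python
import soundfile as sf

def encode(batch, processor):
    audio = batch["audio"]
    if isinstance(audio, dict):
        array = audio["array"]                  # already decoded by the Audio feature
    else:
        array, _sampling_rate = sf.read(audio)  # still a raw file path
    batch["input_values"] = processor(array, sampling_rate=16000).input_values
    return batch
```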
https://api.github.com/repos/huggingface/datasets/issues/3519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3519/comments | https://api.github.com/repos/huggingface/datasets/issues/3519/events | https://github.com/huggingface/datasets/pull/3519 | 1,093,655,205 | PR_kwDODunzps4whnXH | 3,519 | CC100: Using HTTPS for the data source URL fixes load_dataset() | [] | closed | false | null | 0 | 2022-01-04T18:45:54Z | 2022-01-05T17:28:34Z | 2022-01-05T17:28:34Z | null | Without this change the following script (with any lang parameter) consistently fails. After changing to the HTTPS URL, the script works as expected.
```python
from datasets import load_dataset
dataset = load_dataset("cc100", lang="en")
```
This is the error produced by the previous script:
```sh
Using custom data configuration en-lang=en
Downloading and preparing dataset cc100/en to /home/antti/.cache/huggingface/datasets/cc100/en-lang=en/0.0.0/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b...
Traceback (most recent call last):
File "/home/antti/tmp/cc100/cc100.py", line 3, in <module>
dataset = load_dataset("cc100", lang="en")
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/load.py", line 1694, in load_dataset
builder_instance.download_and_prepare(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 595, in download_and_prepare
self._download_and_prepare(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/antti/.cache/huggingface/modules/datasets_modules/datasets/cc100/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b/cc100.py", line 117, in _split_generators
path = dl_manager.download_and_extract(download_url)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 308, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 251, in map_nested
return function(data_struct)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach http://data.statmt.org/cc-100/en.txt.xz (error 503)
```
Note that I get the same behavior when using curl on the command line. The plain HTTP `curl -L http://data.statmt.org/cc-100/en.txt.xz` fails with "503 Service unavailable", but with the HTTPS version of the URL curl starts downloading the file.
My guess is that the server applies overly aggressive rate-limiting. When a client requests an HTTP URL, it (sensibly) gets redirected to the HTTPS equivalent, but the server then sees two requests coming from the same client (the original HTTP one and the redirected HTTPS one) within a brief time window, so the rate limiter kicks in and blocks the second request. If the client initially uses the HTTPS URL, there is only one incoming request, which the rate limiter allows. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3519/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3519/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3519.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3519",
"merged_at": "2022-01-05T17:28:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3519.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3519"
} | true | [] |
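The rate-limiting hypothesis at the end of the description above can be checked with a small script like the following. This is my own sketch, and the status codes may of course change over time:

```python
import requests

for scheme in ("http", "https"):
    url = f"{scheme}://data.statmt.org/cc-100/en.txt.xz"
    response = requests.head(url, allow_redirects=True)
    print(scheme, response.status_code)  # the report suggests 503 for plain HTTP, 200 for HTTPS
```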
https://api.github.com/repos/huggingface/datasets/issues/4826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4826/comments | https://api.github.com/repos/huggingface/datasets/issues/4826/events | https://github.com/huggingface/datasets/pull/4826 | 1,335,987,583 | PR_kwDODunzps49B0V3 | 4,826 | Fix language tags in dataset cards | [] | closed | false | null | 2 | 2022-08-11T13:47:14Z | 2022-08-11T14:17:48Z | 2022-08-11T14:03:12Z | null | Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4826/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4826/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4826",
"merged_at": "2022-08-11T14:03:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4826"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
https://api.github.com/repos/huggingface/datasets/issues/3020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3020/comments | https://api.github.com/repos/huggingface/datasets/issues/3020/events | https://github.com/huggingface/datasets/pull/3020 | 1,015,406,105 | PR_kwDODunzps4sprfa | 3,020 | Add a metric for the MATH dataset (competition_math). | [] | closed | false | null | 4 | 2021-10-04T16:52:16Z | 2021-10-22T10:29:31Z | 2021-10-22T10:29:31Z | null | This metric computes accuracy for the MATH dataset (https://arxiv.org/abs/2103.03874) after canonicalizing the prediction and the reference (e.g., converting "1/2" to "\\\\frac{1}{2}"). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3020/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3020.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3020",
"merged_at": "2021-10-22T10:29:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3020.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3020"
} | true | [
"I believe the only failed test related to this PR is tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math. It gives the following error:\r\n\r\nImportError: To be able to use this dataset, you need to install the following dependencies['math_equivalence'] using 'pip install git+https://github.com/hendrycks/math.git' for instance'\r\n\r\nIt fails along with (these fail with ImportError as well):\r\ntest_load_metric_bertscore\r\ntest_load_metric_bleurt\r\ntest_load_metric_comet\r\ntest_load_metric_coval\r\n\r\nLet me know if there is anything I need to change.",
"Hi ! The script looks all good thanks :)\r\n\r\nTo fix the CI you just need to merge `master` into your branch\r\n```\r\ngit fetch upstream/master\r\ngit merge upstream/master\r\n```\r\n\r\nThen you also need to add `math_equivalence` to the list of git packages installed for the tests in `additional-tests-requirements.txt`\r\nhttps://github.com/huggingface/datasets/blob/ba831e4bcd175ae3d52afbf7d12c4f625bf541b0/additional-tests-requirements.txt#L1-L3",
"I ran:\r\n\r\ngit fetch upstream\r\ngit merge upstream/master\r\n\r\nAnd I also added math_equivalence to the list of git packages installed for the tests in additional-tests-requirements.txt\r\n\r\ntests/test_metric_common.py fails with the same errors as before. tests/test_dataset_cards.py also fails, but it doesn't look related to this PR (it's an issue datasets/ami/README.md).",
"@lhoestq Anything else I can do? I re-merged again and am getting the same test failures as described in the previous comment."
] |
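Once the metric is available, usage presumably looks like the sketch below. The output format is assumed (based on the description of the metric computing accuracy) rather than confirmed:

```python
from datasets import load_metric

math_metric = load_metric("competition_math")
results = math_metric.compute(
    predictions=["1/2"],
    references=["\\frac{1}{2}"],
)
print(results)  # expected to report full accuracy, since "1/2" canonicalizes to "\frac{1}{2}"
```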
https://api.github.com/repos/huggingface/datasets/issues/2100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2100/comments | https://api.github.com/repos/huggingface/datasets/issues/2100/events | https://github.com/huggingface/datasets/pull/2100 | 838,574,631 | MDExOlB1bGxSZXF1ZXN0NTk4NzMzOTM0 | 2,100 | Fix deprecated warning message and docstring | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 3 | 2021-03-23T10:27:52Z | 2021-03-24T08:19:41Z | 2021-03-23T18:03:49Z | null | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2100/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2100.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2100",
"merged_at": "2021-03-23T18:03:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2100.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2100"
} | true | [
"I have a question: what about `dictionary_encode_column_`?\r\n- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.\r\n- It is NOT deprecated in DatasetDict.",
"`dictionary_encode_column_ ` should be deprecated since it never worked correctly. It will be removed in a major release.\r\nThis has to be deprecated in `DatasetDict` as well.\r\nAnd `Dataset.dictionary_encode_column` doesn't exist indeed.",
"Thanks @lhoestq. I have fixed deprecated for `dictionary_encode_column_`."
] |
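A minimal sketch of the pattern this PR describes (a Sphinx `deprecated` directive in the docstring plus a `FutureWarning` at call time). The method name, version number, and message are placeholders, not the library's actual code:

```python
import warnings

class Dataset:
    def dictionary_encode_column_(self, column: str):
        """Dictionary-encode a column (placeholder docstring).

        .. deprecated:: 1.4.0
            This method is deprecated and will be removed in a future major release.
        """
        warnings.warn(
            "dictionary_encode_column_ is deprecated and will be removed in a future release.",
            FutureWarning,
        )
```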
https://api.github.com/repos/huggingface/datasets/issues/567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/567/comments | https://api.github.com/repos/huggingface/datasets/issues/567/events | https://github.com/huggingface/datasets/pull/567 | 691,430,245 | MDExOlB1bGxSZXF1ZXN0NDc4MTc2Njgx | 567 | Fix BLEURT metrics for backward compatibility | [] | closed | false | null | 0 | 2020-09-02T21:22:35Z | 2020-09-03T07:29:52Z | 2020-09-03T07:29:50Z | null | Fix #565 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/567/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/567.diff",
"html_url": "https://github.com/huggingface/datasets/pull/567",
"merged_at": "2020-09-03T07:29:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/567.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/567"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1259/comments | https://api.github.com/repos/huggingface/datasets/issues/1259/events | https://github.com/huggingface/datasets/pull/1259 | 758,565,320 | MDExOlB1bGxSZXF1ZXN0NTMzNzE4NjMz | 1,259 | Add KorQPair dataset | [] | closed | false | null | 2 | 2020-12-07T14:33:57Z | 2021-12-29T00:49:40Z | 2020-12-08T15:11:41Z | null | This PR adds a [Korean paired question dataset](https://github.com/songys/Question_pair) containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1259/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1259.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1259",
"merged_at": "2020-12-08T15:11:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1259.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1259"
} | true | [
"dummy data is missing",
"Hey @cceyda, thanks for pointing that out. I thought I'd added it, but seems like that wasn't the case. Just pushed a new commit with the dummy data."
] |
https://api.github.com/repos/huggingface/datasets/issues/1624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1624/comments | https://api.github.com/repos/huggingface/datasets/issues/1624/events | https://github.com/huggingface/datasets/issues/1624 | 773,669,700 | MDU6SXNzdWU3NzM2Njk3MDA= | 1,624 | Cannot download ade_corpus_v2 | [] | closed | false | null | 8 | 2020-12-23T10:58:14Z | 2021-08-03T05:08:54Z | 2021-08-03T05:08:54Z | null | I tried to load the dataset following this URL: https://huggingface.co/datasets/ade_corpus_v2
but received this error:
`Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module
combined_path, github_file_path, file_path
FileNotFoundError: Couldn't find file locally at ade_corpus_v2/ade_corpus_v2.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py`
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1624/timeline | null | completed | null | null | false | [
"Hi @him1411, the dataset you are trying to load has been added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`",
"`ade_corpus_v2` was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `ade_corpus_v2` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"ade_corpus_v2\", \"Ade_corpos_v2_drug_ade_relation\")\r\n```\r\n\r\n(looks like there is a typo in the configuration name, we'll fix it for the v2.0 release of `datasets` soon)"
] |
https://api.github.com/repos/huggingface/datasets/issues/580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/580/comments | https://api.github.com/repos/huggingface/datasets/issues/580/events | https://github.com/huggingface/datasets/issues/580 | 694,954,551 | MDU6SXNzdWU2OTQ5NTQ1NTE= | 580 | nlp re-creates already-there caches when using a script, but not within a shell | [] | closed | false | null | 2 | 2020-09-07T10:23:50Z | 2020-09-07T15:19:09Z | 2020-09-07T14:26:41Z | null | `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 1)
```
twice. If launched from a `file.py` script, the cache will be re-created the second time. If launched as 3 shell/`ipython` commands, `nlp` will correctly re-use the cache.
As observed with @lhoestq. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/580/timeline | null | completed | null | null | false | [
"Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)",
"Fixed with a clean re-install!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2868/comments | https://api.github.com/repos/huggingface/datasets/issues/2868/events | https://github.com/huggingface/datasets/issues/2868 | 987,139,146 | MDU6SXNzdWU5ODcxMzkxNDY= | 2,868 | Add Common Objects in 3D (CO3D) | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | 0 | 2021-09-02T20:36:12Z | 2021-12-08T12:02:10Z | null | null | ## Adding a Dataset
- **Name:** *Common Objects in 3D (CO3D)*
- **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)*
- **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)*
- **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-downloads/)*
- **Motivation:** *excerpt from above blog post:*
> As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. Here, photorealistic NVS is a major step on the path to fully immersive AR/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences.
>
> Besides practical applications in AR/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model.
>
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2868/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2868/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2144/comments | https://api.github.com/repos/huggingface/datasets/issues/2144/events | https://github.com/huggingface/datasets/issues/2144 | 844,352,067 | MDU6SXNzdWU4NDQzNTIwNjc= | 2,144 | Loading wikipedia 20200501.en throws pyarrow related error | [] | open | false | null | 6 | 2021-03-30T10:38:31Z | 2021-04-01T09:21:17Z | null | null | **Problem description**
I am getting the following error when trying to load the wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931...
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6k/14.6k [00:00<00:00, 5.41MB/s]
Downloading: 59%|███████████████████████████████████████████████████████████████████████████████████████▊ | 10.7G/18.3G [11:30<08:08, 15.5MB/s]
Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data.
Traceback (most recent call last):
File "load_wiki.py", line 2, in <module>
ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset
map_tuple=True,
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table
pa_table = f.read_all()
File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Expected to be able to read 9176784 bytes for message body, got 4918712
**Detailed version info**
datasets==1.5.0
- dataclasses [required: Any, installed: 0.8]
- dill [required: Any, installed: 0.3.3]
- fsspec [required: Any, installed: 0.8.7]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- huggingface-hub [required: <0.1.0, installed: 0.0.7]
- filelock [required: Any, installed: 3.0.12]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- requests [required: Any, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: Any, installed: 4.49.0]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- multiprocess [required: Any, installed: 0.70.11.1]
- dill [required: >=0.3.3, installed: 0.3.3]
- numpy [required: >=1.17, installed: 1.17.0]
- pandas [required: Any, installed: 1.1.5]
- numpy [required: >=1.15.4, installed: 1.17.0]
- python-dateutil [required: >=2.7.3, installed: 2.8.0]
- six [required: >=1.5, installed: 1.15.0]
- pytz [required: >=2017.2, installed: 2020.1]
- pyarrow [required: >=0.17.1, installed: 3.0.0]
- numpy [required: >=1.16.6, installed: 1.17.0]
- requests [required: >=2.19.0, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: >=4.27,<4.50.0, installed: 4.49.0]
- xxhash [required: Any, installed: 2.0.0]
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2144/timeline | null | null | null | null | false | [
"That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')\r\n```",
"Hi ! It looks like the arrow file in the folder\r\n`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.\r\n\r\nCan you take a look and check that it's 18.3GB ?\r\n\r\nIf not, then maybe you need to redownload it:\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache', download_mode=\"force_redownload\")\r\n```",
"> Hi ! It looks like the arrow file in the folder\r\n> `/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.\r\n> \r\n> Can you take a look and check that it's 18.3GB ?\r\n> \r\n> If not, then maybe you need to redownload it:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache', download_mode=\"force_redownload\")\r\n> ```\r\n\r\nHi Ihoestq, thanks for the reply! Actually i think my issue is i couldn't download the dataset beyond 10.7G. It feels like the whole dataset is split into different volumes and after the first one was downloaded it crashed before proceeding to the next one. I did try 'force_redownload' mode but still got the same issue.",
"I just tried on my side and got no issues.\r\nWhen downloading the dataset again, did it crash at 10.7GB as well ?",
"> I just tried on my side and got no issues.\r\n> When downloading the dataset again, did it crash at 10.7GB as well ?\r\n\r\nYes i have tried it multiple times on different machines. I am wondering if you could share the screenshot of your dependency versions and i will try to make them the same as yours?",
"I tried using `datasets` from `master` on macos with python 3.7.2\r\nI also have `requests==2.23.0` and `tqdm==4.45.0`."
] |
https://api.github.com/repos/huggingface/datasets/issues/5989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5989/comments | https://api.github.com/repos/huggingface/datasets/issues/5989/events | https://github.com/huggingface/datasets/issues/5989 | 1,774,134,091 | I_kwDODunzps5pvyNL | 5,989 | Set a rule on the config and split names | [] | open | false | null | 3 | 2023-06-26T07:34:14Z | 2023-07-19T14:22:54Z | null | null | > should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols and directly in datasets and raise
https://github.com/huggingface/datasets-server/issues/853
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5989/timeline | null | null | null | null | false | [
"in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)",
"I imagine that we should stop supporting them, and help the user fix them?",
"See a report where the datasets server fails: https://huggingface.co/datasets/poloclub/diffusiondb/discussions/2#6374ff55b93cbdf65675f564\r\n\r\nThe config name is `random_10k [2m]`!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5536/comments | https://api.github.com/repos/huggingface/datasets/issues/5536/events | https://github.com/huggingface/datasets/issues/5536 | 1,586,930,643 | I_kwDODunzps5elqPT | 5,536 | Failure to hash function when using .map() | [] | closed | false | null | 8 | 2023-02-16T03:12:07Z | 2023-05-22T20:02:16Z | 2023-02-16T14:56:41Z | null | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed._
This issue with `.map()` happens for me consistently, as also described in closed issue #4506
Dataset indices can be individually serialized using dill and pickle without any errors. I'm using tiktoken to encode in the function passed to map(). Similarly, indices can be individually encoded without error.
### Steps to reproduce the bug
```py
from datasets import load_dataset
import tiktoken
dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")
# the mapping function must be defined before it is passed to map()
def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    out = {'ids': ids, 'len': len(ids)}
    return out

tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
)
```
### Expected behavior
Should encode simple text objects.
### Environment info
Python versions tried: both 3.8 and 3.10.10
`PYTHONUTF8=1` as env variable
Datasets tried:
- stas/openwebtext-10k
- rotten_tomatoes
- local text file
OS: Ubuntu Linux 20.04
Package versions:
- torch 1.13.1
- dill 0.3.4 (if using 0.3.6 - same issue)
- datasets 2.9.0
- tiktoken 0.2.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5536/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5536/timeline | null | completed | null | null | false | [
"Hi ! `enc` is not hashable:\r\n```python\r\nimport tiktoken\r\nfrom datasets.fingerprint import Hasher\r\n\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\nHasher.hash(enc)\r\n# raises TypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\nIt happens because it's not picklable, and because of that it's not possible to cache the result of `map`, hence the warning message.\r\n\r\nYou can find more details about caching here: https://huggingface.co/docs/datasets/about_cache\r\n\r\nYou can also provide your own unique hash in `map` if you want, with the `new_fingerprint` argument.\r\nOr disable caching using\r\n```python\r\nimport datasets\r\ndatasets.disable_caching()\r\n```",
"@lhoestq Thank you for the explanation and advice. Will relay all of this to the repo where this (non)issue arose. \r\n\r\nGreat job with huggingface! ",
"We made tiktoken tokenizers hashable in #5552, which is included in today's release `datasets==2.10.0`",
"Just a heads up that when I'm trying to use TikToken along with the a given Dataset `.map()` method, I am still met with the following error :\r\n\r\n```\r\n File \"/opt/conda/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save\r\n StockPickler.save(self, obj, save_persistent_id)\r\n File \"/opt/conda/lib/python3.8/pickle.py\", line 578, in save\r\n rv = reduce(self.proto)\r\nTypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\n\r\nMy current environment is running datasets v2.10.0.",
"cc @mariosasko ",
"@lhoestq @edhenry I am also seeing this, do you have any suggested solution?",
"With which `datasets` version ? Can you try to udpate ?",
"@lhoestq @edhenry I am on datasets version `'2.12.0'. I see the same `TypeError: cannot pickle 'builtins.CoreBPE' object` that others are seeing.",
"I am able to reproduce this on datasets 2.14.2. The `datasets.disable_caching()` doesn't work around it.\r\n\r\n@lhoestq - you might want to reopen this issue. Because of this issue folks won't be able run Karpathy's NanoGPT :(."
] |
https://api.github.com/repos/huggingface/datasets/issues/1079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1079/comments | https://api.github.com/repos/huggingface/datasets/issues/1079/events | https://github.com/huggingface/datasets/pull/1079 | 756,652,427 | MDExOlB1bGxSZXF1ZXN0NTMyMTY4Nzky | 1,079 | nkjp-ner | [] | closed | false | null | 0 | 2020-12-03T22:47:26Z | 2020-12-04T09:42:06Z | 2020-12-04T09:42:06Z | null | - **Name:** *nkjp-ner*
- **Description:** *The NKJP-NER is based on a human-annotated part of NKJP. We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.*
- **Data:** *https://klejbenchmark.com/tasks/*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.*
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1079/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1079.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1079",
"merged_at": "2020-12-04T09:42:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1079.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1079"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3790/comments | https://api.github.com/repos/huggingface/datasets/issues/3790/events | https://github.com/huggingface/datasets/pull/3790 | 1,150,646,899 | PR_kwDODunzps4zedMa | 3,790 | Add doc builder scripts | [] | closed | false | null | 3 | 2022-02-25T16:38:47Z | 2022-03-01T15:55:42Z | 2022-03-01T15:55:41Z | null | I added the three scripts:
- build_dev_documentation.yml
- build_documentation.yml
- delete_dev_documentation.yml
I took them from `transformers` and made a few changes:
- I removed the `transformers`-specific dependencies
- I changed all the paths to be "datasets" instead of "transformers"
- I passed the `--library_name datasets` arg to the `doc-builder build` command (according to https://github.com/huggingface/doc-builder/pull/94/files#diff-bcc33cf7c223511e498776684a9a433810b527a0a38f483b1487e8a42b6575d3R26)
cc @LysandreJik @mishig25 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3790/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3790.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3790",
"merged_at": "2022-03-01T15:55:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3790.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3790"
} | true | [
"I think we're only missing the hosted runner to be configured for this repository and we should be good",
"Regarding the self-hosted runner, I actually encourage using the approach defined here: https://github.com/huggingface/transformers/pull/15710, which doesn't leverage a self-hosted runner. This prevents queuing jobs, which is important when we expect several concurrent jobs.",
"Opened a PR for that on your branch here: https://github.com/huggingface/datasets/pull/3793"
] |
https://api.github.com/repos/huggingface/datasets/issues/4768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4768/comments | https://api.github.com/repos/huggingface/datasets/issues/4768/events | https://github.com/huggingface/datasets/pull/4768 | 1,321,913,645 | PR_kwDODunzps48TRUH | 4,768 | Unpin rouge_score test dependency | [] | closed | false | null | 1 | 2022-07-29T08:17:40Z | 2022-07-29T16:42:28Z | 2022-07-29T16:29:17Z | null | Once `rouge-score` has made the 0.1.2 release to fix their issue https://github.com/google-research/google-research/issues/1212, we can unpin it.
Related to:
- #4735 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4768/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4768/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4768.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4768",
"merged_at": "2022-07-29T16:29:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4768.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4768"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/185/comments | https://api.github.com/repos/huggingface/datasets/issues/185/events | https://github.com/huggingface/datasets/pull/185 | 623,172,484 | MDExOlB1bGxSZXF1ZXN0NDIxODkxNjY2 | 185 | [Commands] In-detail instructions to create dummy data folder | [] | closed | false | null | 1 | 2020-05-22T12:26:25Z | 2020-05-22T14:06:35Z | 2020-05-22T14:06:34Z | null | ### Dummy data command
This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives in-detail instructions on how to add the dummy data files.
It would be great if you could try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_script>/dummy_data datasets/<dataset_name>/dummy_data_copy` and running the command `python nlp-cli dummy_data ./datasets/<dataset_name>` to see if you like the instructions.
### CONTRIBUTING.md
Also the CONTRIBUTING.md is made cleaner including a new section on "How to add a dataset".
### Current PRs
It would be nice to check whether this command helps with the current PRs that add a dataset, *e.g.* #169. I have commented on those PRs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/185/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/185.diff",
"html_url": "https://github.com/huggingface/datasets/pull/185",
"merged_at": "2020-05-22T14:06:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/185.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/185"
} | true | [
"awesome !"
] |