url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.83B) | node_id (stringlengths 18-32) | number (int64 1-6.09k) | title (stringlengths 1-290) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | milestone (dict) | comments (int64 0-54) | created_at (stringlengths 20-20) | updated_at (stringlengths 20-20) | closed_at (stringlengths 20-20 ⌀) | active_lock_reason (null) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) | is_pull_request (bool 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3258/comments | https://api.github.com/repos/huggingface/datasets/issues/3258/events | https://github.com/huggingface/datasets/issues/3258 | 1,052,188,195 | I_kwDODunzps4-tx4j | 3,258 | Reload dataset that was already downloaded with `load_from_disk` from cloud storage | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2021-11-12T17:14:59Z | 2021-11-12T17:14:59Z | null | null | `load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once.
It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3258/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3655/comments | https://api.github.com/repos/huggingface/datasets/issues/3655/events | https://github.com/huggingface/datasets/issues/3655 | 1,119,801,077 | I_kwDODunzps5Cvs71 | 3,655 | Pubmed dataset not reachable | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2022-01-31T18:45:47Z | 2022-12-19T19:18:10Z | 2022-02-14T14:15:41Z | null | ## Describe the bug
Trying to use the `pubmed` dataset fails to reach / download the source files.
## Steps to reproduce the bug
```python
import datasets

pubmed_train = datasets.load_dataset('pubmed', split='train')
```
## Expected results
Should begin downloading the pubmed dataset.
## Actual results
```
ConnectionError: Couldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz (InvalidSchema("No connection adapters were found for 'ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz'"))
```
## Environment info
- `datasets` version: 1.18.2
- Platform: macOS-11.4-x86_64-i386-64bit
- Python version: 3.8.2
- PyArrow version: 6.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3655/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3655/timeline | null | completed | null | null | false | [
"Hi @abhi-mosaic, thanks for reporting.\r\n\r\nI'm looking at it... ",
"also hitting this issue",
"Hey @albertvillanova, sorry to reopen this... I can confirm that on `master` branch the dataset is downloadable now but it is still broken in streaming mode:\r\n\r\n```python\r\n >>> import datasets\r\n >>> pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True)\r\n >>> next(iter(pubmed_train))\r\n```\r\n```\r\n No such file or directory: 'gzip://pubmed22n0001.xml::ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n0001.xml.gz'\r\n```\r\n",
"Hi @abhi-mosaic, would you mind opening another issue for this new problem?\r\n\r\nFirst issue (already solved) was a ConnectionError due to the yearly update release of PubMed: we fixed it by updating the URLs from year 2021 to year 2022.\r\n\r\nHowever this is another problem: to make pubmed streamable. Please note that NOT all our datastes are streamable: we are making streamable more and more of them... but this is an on-going process...\r\n\r\nThanks.",
"@albertvillanova \r\nWhen I tried below codes, I got the similar error\r\n\r\n```\r\n\r\ndataset=load_dataset(\"pubmed\",split=\"train\")\r\n\r\nCouldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0601.xml.gz\r\n```",
"@y-rok you need to update `datasets`:\r\n```shell\r\npip install -U datasets\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/4665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4665/comments | https://api.github.com/repos/huggingface/datasets/issues/4665/events | https://github.com/huggingface/datasets/issues/4665 | 1,299,652,638 | I_kwDODunzps5NdyAe | 4,665 | Unable to create dataset having Python dataset script only | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-07-09T11:45:46Z | 2022-07-11T07:10:09Z | 2022-07-11T07:10:01Z | null | ## Describe the bug
Hi there,
I'm trying to add the following dataset to Huggingface datasets: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/blob/
I'm trying to do so using the CLI commands, but it seems that this command generates the wrong `dataset_info.json` file (you can find it in the repo already):
```
datasets-cli test Heriot-WattUniversity/dialog-babi/dialog_babi.py --save_infos --all-configs
```
while it errors when I remove the python script:
```
datasets-cli test Heriot-WattUniversity/dialog-babi/ --save_infos --all-configs
```
The error message is the following:
```
FileNotFoundError: Unable to resolve any data file that matches '['**']' at /Users/as2180/workspace/Heriot-WattUniversity/dialog-babi with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4665/timeline | null | completed | null | null | false | [
"Hi @aleSuglia, thanks for reporting.\r\n\r\nWe are having a look at it. \r\n\r\nWe transfer this issue to the Community tab of the corresponding Hub dataset: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/discussions"
] |
https://api.github.com/repos/huggingface/datasets/issues/4306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4306/comments | https://api.github.com/repos/huggingface/datasets/issues/4306/events | https://github.com/huggingface/datasets/issues/4306 | 1,231,137,204 | I_kwDODunzps5JYam0 | 4,306 | `load_dataset` does not work with certain filename. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-05-10T13:14:04Z | 2022-05-10T18:58:36Z | 2022-05-10T18:58:09Z | null | ## Describe the bug
This is a weird bug that took me some time to find out.
I have a JSON dataset that I want to load with `load_dataset` like this:
```
from datasets import load_dataset

data_files = dict(train="train.json.zip", val="val.json.zip")
dataset = load_dataset("json", data_files=data_files, field="data")
```
## Expected results
No error.
## Actual results
The val file is loaded as expected, but the train file throws JSON decoding error:
```
Traceback (most recent call last):
  File "<ipython-input-74-97947e92c100>", line 5, in <module>
  File "/home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/load.py", line 1687, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare
    self._download_and_prepare(
  File "/home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 694, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1151, in _prepare_split
    for key, table in logging.tqdm(
  File "/home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/notebook.py", line 257, in __iter__
    for obj in super(tqdm_notebook, self).__iter__():
  File "/home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/std.py", line 1183, in __iter__
    for obj in iterable:
  File "/home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 90, in _generate_tables
    dataset = json.load(f)
  File "/home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051)
```
However, when I rename the `train.json.zip` to other names (like `training.json.zip`, or even to `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well.
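For anyone hitting the same symptom, the resolution noted in the comments below points at a stale `datasets` cache. A minimal workaround sketch, assuming the same files as above (`download_mode="force_redownload"` simply bypasses any cached preparation):
```python
from datasets import load_dataset

# Bypass any cached (possibly corrupted) preparation of these files.
data_files = dict(train="train.json.zip", val="val.json.zip")
dataset = load_dataset(
    "json",
    data_files=data_files,
    field="data",
    download_mode="force_redownload",
)
```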
## Environment info
```
- `datasets` version: 2.1.0
- Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4306/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4306/timeline | null | completed | null | null | false | [
"Never mind. It is because of the caching of datasets..."
] |
https://api.github.com/repos/huggingface/datasets/issues/705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/705/comments | https://api.github.com/repos/huggingface/datasets/issues/705/events | https://github.com/huggingface/datasets/issues/705 | 713,709,100 | MDU6SXNzdWU3MTM3MDkxMDA= | 705 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' | [] | closed | false | null | 2 | 2020-10-02T15:27:55Z | 2020-10-05T08:14:59Z | 2020-10-05T08:14:59Z | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, in CSV format, containing just text and label columns, with a comma as the separator. Here's a sample:
```
text,label
"Registra-se a presenรงa do acadรชmico <name> . <REL_SEP> Ao me deparar com a descriรงรฃo de dois autores no polo ativo da aรงรฃo junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamaรงรฃo trabalhista individual . <REL_SEP> Diante disso , face a ausรชncia injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relaรงรฃo a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessรฃo dos benefรญcios da Justiรงa Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiรชncia encerrada ร s 8h42min . <REL_SEP> <name> <REL_SEP> Juรญza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretรกrio de Audiรชncia .",NO_RELATION
```
However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section.
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using `conda create -n transformers python=3.7`
2. Cloned transformers master, `cd` into it and installed using `pip install --editable . -r examples/requirements.txt`
3. Installed tensorflow with `pip install tensorflow`
4. Ran `run_tf_text_classification.py` with the following parameters:
```
--train_file <DATASET_PATH>/train.csv \
--dev_file <DATASET_PATH>/dev.csv \
--test_file <DATASET_PATH>/test.csv \
--label_column_id 1 \
--model_name_or_path neuralmind/bert-base-portuguese-cased \
--output_dir <OUTPUT_PATH> \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 1000 \
--evaluate_during_training \
--save_steps 1000 \
--overwrite_output_dir \
--overwrite_cache
```
I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Here is the stack trace:
```
2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz
2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False
10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
Using custom data configuration default
Traceback (most recent call last):
File "run_tf_text_classification.py", line 283, in <module>
main()
File "run_tf_text_classification.py", line 222, in main
max_seq_length=data_args.max_seq_length,
File "run_tf_text_classification.py", line 43, in get_tfds
ds = datasets.load_dataset("csv", data_files=files)
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config
for key in sorted(data_files.keys()):
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```
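Since the crash happens when `datasets` tries to sort `NamedSplit` keys, a possible user-side workaround (an untested sketch, not the upstream fix) is to pass plain string split names in `data_files`:
```python
import datasets

# Plain string keys sort fine, unlike NamedSplit objects.
files = {"train": "train.csv", "validation": "dev.csv", "test": "test.csv"}
ds = datasets.load_dataset("csv", data_files=files)
```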
## Expected behavior
Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). @jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/705/timeline | null | completed | null | null | false | [
"Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR",
"Thanks @lhoestq !"
] |
https://api.github.com/repos/huggingface/datasets/issues/1221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1221/comments | https://api.github.com/repos/huggingface/datasets/issues/1221/events | https://github.com/huggingface/datasets/pull/1221 | 758,016,032 | MDExOlB1bGxSZXF1ZXN0NTMzMjYxNjkw | 1,221 | Add HKCanCor | [] | closed | false | null | 0 | 2020-12-06T20:32:07Z | 2020-12-09T16:34:18Z | 2020-12-09T16:34:18Z | null | This PR adds the [Hong Kong Cantonese Corpus](http://compling.hss.ntu.edu.sg/hkcancor/), by [Luke and Wong 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf).
The dummy data included here was manually created, as the original dataset uses an XML-like format (see a copy hosted [here](https://github.com/fcbond/hkcancor/blob/master/sample/d1_v.txt) for example) that requires a few processing steps. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1221/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1221",
"merged_at": "2020-12-09T16:34:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1221"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1138/comments | https://api.github.com/repos/huggingface/datasets/issues/1138/events | https://github.com/huggingface/datasets/pull/1138 | 757,378,406 | MDExOlB1bGxSZXF1ZXN0NTMyNzY1NTI2 | 1,138 | updated after the class name update | [] | closed | false | null | 0 | 2020-12-04T20:19:43Z | 2020-12-05T15:43:32Z | 2020-12-05T15:43:32Z | null | @lhoestq <--- | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1138/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1138.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1138",
"merged_at": "2020-12-05T15:43:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1138.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1138"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4596/comments | https://api.github.com/repos/huggingface/datasets/issues/4596/events | https://github.com/huggingface/datasets/issues/4596 | 1,288,381,735 | I_kwDODunzps5MyyUn | 4,596 | Dataset Viewer issue for universal_dependencies | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-06-29T08:50:29Z | 2022-09-07T11:29:28Z | 2022-09-07T11:29:27Z | null | ### Link
https://huggingface.co/datasets/universal_dependencies
### Description
invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
### Owner
_No response_ | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4596/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4596/timeline | null | completed | null | null | false | [
"Thanks, looking at it!",
"Finally fixed! We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps://huggingface.co/datasets/universal_dependencies/viewer/aqz_tudet/train\r\n\r\n<img width=\"1561\" alt=\"Capture dโeฬcran 2022-09-07 aฬ 13 29 18\" src=\"https://user-images.githubusercontent.com/1676121/188867795-4f7dd438-d4f2-46cd-8a92-20a37fb2d6bc.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/785/comments | https://api.github.com/repos/huggingface/datasets/issues/785/events | https://github.com/huggingface/datasets/pull/785 | 733,719,419 | MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1 | 785 | feat(aslg_pc12): add dev and test data splits | [] | closed | false | null | 2 | 2020-10-31T13:25:38Z | 2020-11-10T15:29:30Z | 2020-11-10T15:29:30Z | null | For reproducibility sake, it's best if there are defined dev and test splits.
The original paper's authors did not define splits, neither for the entire dataset nor for the sample loaded via this library, so I decided to define:
- 5/7th for train
- 1/7th for dev
- 1/7th for test
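For reference, the same 5/7 - 1/7 - 1/7 proportions can be reproduced on the user side with `train_test_split` (a sketch, assuming the dataset loads as a single `train` split; the seed is arbitrary):
```python
from datasets import load_dataset

ds = load_dataset("aslg_pc12", split="train")

# Hold out 1/7 for test, then 1/7 of the total (1/6 of the remainder) for dev.
tmp = ds.train_test_split(test_size=1 / 7, seed=42)
tmp2 = tmp["train"].train_test_split(test_size=1 / 6, seed=42)
train, dev, test = tmp2["train"], tmp2["test"], tmp["test"]
```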
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/785/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/785/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/785",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/785"
} | true | [
"Hi ! I'm not sure we should make this split decision arbitrarily on our side. Users can split it afterwards to whatever they want using `dataset.train_test_split` for example.\r\nMoreover it looks like there's already papers that use this dataset and propose their own splits ([here](http://xanthippi.ceid.upatras.gr/HealthSign/resources/Publications/sitis_paper_25_10.pdf) 80-20) \r\nWhat do you think ?",
"I was not aware of the `train_test_split` method, thanks!\r\nSoe ven though it contributes to reproducibility, no need to do this split then."
] |
https://api.github.com/repos/huggingface/datasets/issues/3212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3212/comments | https://api.github.com/repos/huggingface/datasets/issues/3212/events | https://github.com/huggingface/datasets/issues/3212 | 1,044,640,967 | I_kwDODunzps4-Q_TH | 3,212 | Sort files before loading | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2021-11-04T11:08:31Z | 2021-11-05T17:49:58Z | 2021-11-05T17:49:58Z | null | When loading a dataset that consists of several files (e.g. `my_data/data_001.json`, `my_data/data_002.json` etc.) they are not loaded in order when using `load_dataset("my_data")`.
This could lead to counter-intuitive results if, for example, the data files are sorted by date or similar since they would appear in different order in the `Dataset`.
The straightforward solution is to sort the list of files alphabetically before loading them.
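Until that lands in the library, a user-side stopgap is to pass an explicitly sorted file list (a minimal sketch, reusing the `my_data` layout from above):
```python
import glob
from datasets import load_dataset

# Sort shard paths so data_001.json, data_002.json, ... load in order.
files = sorted(glob.glob("my_data/data_*.json"))
dataset = load_dataset("json", data_files=files)
```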
cc @lhoestq
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3212/timeline | null | completed | null | null | false | [
"This will be fixed by https://github.com/huggingface/datasets/pull/3221"
] |
https://api.github.com/repos/huggingface/datasets/issues/161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/161/comments | https://api.github.com/repos/huggingface/datasets/issues/161/events | https://github.com/huggingface/datasets/issues/161 | 620,487,535 | MDU6SXNzdWU2MjA0ODc1MzU= | 161 | Discussion on version identifier & MockDataLoaderManager for test data | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | 12 | 2020-05-18T20:31:30Z | 2020-05-24T18:10:03Z | null | null | Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers the error. If I can get something to work, I can include it in my data PR once I'm done. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/161/timeline | null | null | null | null | false | [
"usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ",
"I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more sanity checks/tests (just got tests passing).\r\n\r\nI figured out how to get all tests passing by adding a download command and some finagling with the data zip https://github.com/EntilZha/nlp/blob/master/tests/utils.py#L127\r\n\r\n",
"I'm quite positive that you can just replace the `dl_manager.download()` statements here: https://github.com/EntilZha/nlp/blob/4d46443b65f1f756921db8275594e6af008a1de7/datasets/qanta/qanta.py#L194 with `dl_manager.download_and_extract()` even though you don't extract anything. I would prefer to avoid adding more functions to the MockDataLoadManager and keep it as simple as possible (It's already to complex now IMO). \r\n\r\nCould you check if you can replace the `download()` function? ",
"I might be doing something wrong, but swapping those two gives this error:\r\n```\r\n> with open(path) as f:\r\nE IsADirectoryError: [Errno 21] Is a directory: 'datasets/qanta/dummy/mode=first,char_skip=25/2018.4.18/dummy_data-zip-extracted/dummy_data'\r\n\r\nsrc/nlp/datasets/qanta/3d965403133687b819905ead4b69af7bcee365865279b2f797c79f809b4490c3/qanta.py:280: IsADirectoryError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n```\r\n\r\nSo it seems like the directory name is getting passed. Is this not functioning as expected, or is there some caching happening maybe? I deleted the dummy files and re-ran the import script with no changes. I'm digging a bit in with a debugger, but no clear reason yet",
"From what I can tell here: https://github.com/huggingface/nlp/blob/master/tests/utils.py#L115\r\n\r\n1. `data_url` is the correct http link\r\n2. `path_to_dummy_data` is a directory, which is causing the issue\r\n\r\nThat path comes from `download_dummy_data`, which I think assumes that the data comes from the zip file, but isn't aware of individual files. So it seems like it data manager needs to be aware if the url its getting is for a file or a zip/directory, and pass this information along. This might happen in `download_dummy_data`, but probably better to happen in `download_and_extract`? Maybe a simple check to see if `os.path.basename` returns the dummy data zip filename, if not then join paths with the basename of the url?",
"I think the dataset script works correctly. Just the dummy data structure seems to be wrong. I will soon add more commands that should make the create of the dummy data easier.\r\n\r\nI'd recommend that you won't concentrate too much on the dummy data.\r\nIf you manage to load the dataset correctly via:\r\n\r\n```python \r\n# use local path to qanta\r\nnlp.load_dataset(\"./datasets/qanta\")\r\n```\r\n\r\nthen feel free to open a PR and we will look into the dummy data problem together :-) \r\n\r\nAlso please make sure that the Version is in the format 1.0.0 (three numbers separated by two points) - not a date. ",
"The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n\r\nOn version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?",
"> The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n> \r\n> On version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?\r\n\r\nIt would cause issues for sure for the tests....not sure if it would also cause issues otherwise.\r\n\r\nI would prefer to keep the same version style as we have for other models. You could for example simply add version 1.0.0 and add a comment with the date you currently use for the versioning.\r\n\r\n What is your opinion regarding the version here @lhoestq @mariamabarham @thomwolf ? ",
"Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia",
"> Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia\r\n\r\nI'm not sure if this will work because the name should be unique and it seems that he has multiple config name in his data with the same version.\r\nAs @patrickvonplaten suggested, I think you can add a comment about the version in the data description.",
"Actually maybe our versioning format (inherited from tfds) is too strong for what we use it for?\r\nWe could allow any string maybe?\r\n\r\nI see it more and more like an identifier for the user that we will back with a serious hashing/versioning system.- so we could let the user quite free on it.",
"I'm good with either putting it in description, adding it to the config, or loosening version formatting. I mostly don't have a full conceptual grasp of what each identifier ends up meaning in the datasets code so hard to evaluate the best approach.\r\n\r\nFor background, the multiple formats is a consequence of:\r\n\r\n1. Each example is one multi-sentence trivia question\r\n2. For training, its better to treat each sentence as an example\r\n3. For evaluation, should test on: (1) first sentence, (2) full question, and (3) partial questions (does the model get the question right having seen the first half)\r\n\r\nWe use the date format for version since: (1) we expect some degree of updates since new questions come in every year and (2) the timestamp itself matches the Wikipedia dump that it is dependent on (so similar to the Wikipedia dataset).\r\n\r\nperhaps this is better discussed in https://github.com/huggingface/nlp/pull/169 or update title?"
] |
https://api.github.com/repos/huggingface/datasets/issues/3847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3847/comments | https://api.github.com/repos/huggingface/datasets/issues/3847/events | https://github.com/huggingface/datasets/issues/3847 | 1,161,856,417 | I_kwDODunzps5FQIWh | 3,847 | Datasets' cache not re-used | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 13 | 2022-03-07T19:55:15Z | 2023-02-02T23:35:45Z | null | null | ## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing caches are not fully reused in the first few runs, even though their `.arrow` cache files are present in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tokenizer works perfectly with caching, but not the RoBERTa tokenizer in this example.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
text_column_name = "text"
column_names = raw_datasets["train"].column_names
def tokenize_function(examples):
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
remove_columns=column_names,
load_from_cache_file=True,
desc="Running tokenizer on every text in dataset",
)
```
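One way to see whether the cache can possibly be hit is to compare the hash that `datasets` computes for the mapped function across runs (a diagnostic sketch; `Hasher` is the internal helper `map` uses for fingerprinting):
```python
from datasets.fingerprint import Hasher

# If this value differs between two runs, `map` cannot reuse its cache:
# the tokenizer state captured by the closure has changed.
print(Hasher.hash(tokenize_function))
```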
## Expected results
No tokenization would be required after the 1st run. Everything should be loaded from the cache.
## Actual results
Tokenization for some subsets are repeated at the 2nd and 3rd run. Starting from the 4th run, everything are loaded from cache.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Ubuntu 18.04.6 LTS
- Python version: 3.6.9
- PyArrow version: 6.0.1
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3847/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3847/timeline | null | null | null | null | false | [
"<s>I think this is because the tokenizer is stateful and because the order in which the splits are processed is not deterministic. Because of that, the hash of the tokenizer may change for certain splits, which causes issues with caching.\r\n\r\nTo fix this we can try making the order of the splits deterministic for map.</s>",
"Actually this is not because of the order of the splits, but most likely because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer).\r\n\r\nThis is a bit trickier to fix, we can explore fixing this next week maybe",
"Sorry didn't have the bandwidth to take care of this yet - will re-assign when I'm diving into it again !",
"I had this issue with `run_speech_recognition_ctc.py` for wa2vec2.0 fine-tuning. I made a small change and the hash for the function (which includes tokenisation) is now the same before and after pre-porocessing. With the hash being the same, the caching works as intended.\r\n\r\nBefore:\r\n```\r\n def prepare_dataset(batch):\r\n # load audio\r\n sample = batch[audio_column_name]\r\n\r\n inputs = feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"])\r\n batch[\"input_values\"] = inputs.input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n\r\n # encode targets\r\n additional_kwargs = {}\r\n if phoneme_language is not None:\r\n additional_kwargs[\"phonemizer_lang\"] = phoneme_language\r\n\r\n batch[\"labels\"] = tokenizer(batch[\"target_text\"], **additional_kwargs).input_ids\r\n\r\n return batch\r\n\r\n with training_args.main_process_first(desc=\"dataset map preprocessing\"):\r\n vectorized_datasets = raw_datasets.map(\r\n prepare_dataset,\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=num_workers,\r\n desc=\"preprocess datasets\",\r\n )\r\n```\r\nAfter:\r\n```\r\n def prepare_dataset(batch, feature_extractor, tokenizer):\r\n # load audio\r\n sample = batch[audio_column_name]\r\n\r\n inputs = feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"])\r\n batch[\"input_values\"] = inputs.input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n\r\n # encode targets\r\n additional_kwargs = {}\r\n if phoneme_language is not None:\r\n additional_kwargs[\"phonemizer_lang\"] = phoneme_language\r\n\r\n batch[\"labels\"] = tokenizer(batch[\"target_text\"], **additional_kwargs).input_ids\r\n\r\n return batch\r\n\r\n pd = lambda batch: prepare_dataset(batch, feature_extractor, tokenizer)\r\n\r\n with training_args.main_process_first(desc=\"dataset map preprocessing\"):\r\n vectorized_datasets = raw_datasets.map(\r\n pd,\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=num_workers,\r\n desc=\"preprocess datasets\",\r\n )\r\n```",
"Not sure why the second one would work and not the first one - they're basically the same with respect to hashing. In both cases the function is hashed recursively, and therefore the feature_extractor and the tokenizer are hashed the same way.\r\n\r\nWith which tokenizer or feature extractor are you experiencing this behavior ?\r\n\r\nDo you also experience this ?\r\n> Tokenization for some subsets are repeated at the 2nd and 3rd run. Starting from the 4th run, everything are loaded from cache.",
"Thanks ! Hopefully this can be useful to others, and also to better understand and improve hashing/caching ",
"`tokenizer.save_pretrained(training_args.output_dir)` produces a different tokenizer hash when loaded on restart of the script. When I was debugging before I was terminating the script prior to this command, then rerunning. \r\n\r\nI compared the tokenizer items on the first and second runs, there are two different items:\r\n1st:\r\n```\r\n('_additional_special_tokens', [AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True)])\r\n\r\n...\r\n\r\n('tokens_trie', <transformers.tokenization_utils.Trie object at 0x7f4d6d0ddb38>)\r\n```\r\n\r\n2nd:\r\n```\r\n('_additional_special_tokens', [AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", 
rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True)])\r\n\r\n...\r\n\r\n('tokens_trie', <transformers.tokenization_utils.Trie object at 0x7efc23dcce80>)\r\n```\r\n\r\n On every run of this the special tokens are being added on, and the hash is different on the `tokens_trie`. The increase in the special tokens category could be cleaned, but not sure about the hash for the `tokens_trie`. What might work is that the call for the tokenizer encoding can be translated into a function that strips any unnecessary information out, but that's a guess.\r\n",
"Thanks for investigating ! Does that mean that `save_pretrained`() produces non-deterministic tokenizers on disk ? Or is it `from_pretrained()` which is not deterministic given the same files on disk ?\r\n\r\nI think one way to fix this would be to make save/from_pretrained deterministic, or make the pickling of `transformers.tokenization_utils.Trie` objects deterministic (this could be implemented in `transformers`, but maybe let's discuss in an issue in `transformers` before opening a PR)",
"Late to the party but everything should be deterministic (afaik at least).\r\n\r\nBut `Trie` is a simple class object, so afaik it's hash function is linked to its `id(self)` so basically where it's stored in memory, so super highly non deterministic. Could that be the issue ?",
"> But Trie is a simple class object, so afaik it's hash function is linked to its id(self) so basically where it's stored in memory, so super highly non deterministic. Could that be the issue ?\r\n\r\nWe're computing the hash of the pickle dump of the class so it should be fine, as long as the pickle dump is deterministic",
"I've ported wav2vec2.0 fine-tuning into Optimum-Graphcore which is where I found the issue. The majority of the script was copied from the Transformers version to keep it similar, [here is the tokenizer loading section from the source](https://github.com/huggingface/transformers/blob/f0982682bd6fd0b438dda79ec45f3a8fac83a985/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L531).\r\n\r\nIn the last comment I have two loaded tokenizers, one from run 'N' of the script and one from 'N+1'. I think what's happening is that when you add special tokens (e.g. PAD and UNK) another AddedToken object is appended when tokenizer is saved regardless of whether special tokens are there already. \r\n\r\nIf there is a AddedTokens cleanup at load/save this could solve the issue, but then is Trie going to cause hash to be different? I'm not sure. ",
"Which Python version are you using ?\r\n\r\nThe trie is basically a big dict of dics, so deterministic nature depends on python version:\r\nhttps://stackoverflow.com/questions/2053021/is-the-order-of-a-python-dictionary-guaranteed-over-iterations\r\n\r\nMaybe the investigation is actually not finding the right culprit though (the memory id is changed, but `datasets` is not using that to compare, so maybe we need to be looking within `datasets` so see where the comparison fails)",
"Similar issue found on `BartTokenizer`. You can bypass the bug by loading a fresh new tokenizer everytime.\r\n\r\n```\r\n dataset = dataset.map(lambda x: tokenize_func(x, BartTokenizer.from_pretrained(xxx)),\r\n num_proc=num_proc, desc='Tokenize')\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/1800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1800/comments | https://api.github.com/repos/huggingface/datasets/issues/1800/events | https://github.com/huggingface/datasets/pull/1800 | 797,798,689 | MDExOlB1bGxSZXF1ZXN0NTY0NzE5MjA3 | 1,800 | Add DuoRC Dataset | [] | closed | false | null | 1 | 2021-01-31T20:01:59Z | 2021-02-03T05:01:45Z | 2021-02-02T22:49:26Z | null | Hi,
DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or no answers. I have also added ParaphraseRC - the other type of DuoRC dataset where questions are based on Wikipedia movie plots and answers are based on corresponding IMDb movie plots.
Paper : [https://arxiv.org/abs/1804.07927](https://arxiv.org/abs/1804.07927)
I want to add this to 🤗 datasets to make it more accessible to the community. I have added all the details that I could find. Please let me know if anything else is needed from my end.
Thanks,
Gunjan
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1800/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1800/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1800",
"merged_at": "2021-02-02T22:49:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1800"
} | true | [
"Thanks for approving @lhoestq!\r\nWill apply these changes for the other datasets I've added too."
] |
https://api.github.com/repos/huggingface/datasets/issues/874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/874/comments | https://api.github.com/repos/huggingface/datasets/issues/874/events | https://github.com/huggingface/datasets/issues/874 | 748,193,140 | MDU6SXNzdWU3NDgxOTMxNDA= | 874 | trec dataset unavailable | [] | closed | false | null | 2 | 2020-11-22T08:09:36Z | 2020-11-27T13:56:42Z | 2020-11-27T13:56:42Z | null | Hi
when I try to load the trec dataset I am getting the errors below, thanks for your help
```python
datasets.load_dataset("trec", split="train")
```
```
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators
dl_files = dl_manager.download_and_extract(_URLs)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/874/timeline | null | completed | null | null | false | [
"This was fixed in #740 \r\nCould you try to update `datasets` and try again ?",
"This has been fixed in datasets 1.1.3"
] |
https://api.github.com/repos/huggingface/datasets/issues/3744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3744/comments | https://api.github.com/repos/huggingface/datasets/issues/3744/events | https://github.com/huggingface/datasets/issues/3744 | 1,141,461,165 | I_kwDODunzps5ECVCt | 3,744 | Better shards shuffling in streaming mode | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 0 | 2022-02-17T15:07:21Z | 2022-02-23T15:00:58Z | 2022-02-23T15:00:58Z | null | Sometimes a dataset script has a `_split_generators` that returns several files as well as the corresponding metadata of each file. It often happens that they end up in two separate lists in the `gen_kwargs`:
```python
gen_kwargs = {
"files": [os.path.join(data_dir, filename) for filename in all_files],
"metadata_files": [all_metadata[filename] for filename in all_files],
}
```
It happened for Multilingual Spoken Words for example in #3666
However currently **the two lists are shuffled independently** when shuffling the shards in streaming mode. This leads to `_generate_examples` not having the right metadata for each file.
To prevent this issue, I suggest that we always shuffle lists of the same length the exact same way, to avoid this kind of serious but silent bug.
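For illustration, a minimal sketch of the proposed behavior (the helper name is hypothetical):
```python
import random

def shuffle_lists_together(*lists, seed=42):
    # all lists must have the same length so they can be permuted in lockstep
    assert len({len(lst) for lst in lists}) == 1
    order = list(range(len(lists[0])))
    random.Random(seed).shuffle(order)
    return tuple([lst[i] for i in order] for lst in lists)

files = ["shard-0.txt", "shard-1.txt", "shard-2.txt"]
metadata_files = ["shard-0.json", "shard-1.json", "shard-2.json"]
files, metadata_files = shuffle_lists_together(files, metadata_files)
# both lists receive the identical permutation, so each shard keeps its metadata
```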
cc @polinaeterna | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3744/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5093/comments | https://api.github.com/repos/huggingface/datasets/issues/5093/events | https://github.com/huggingface/datasets/issues/5093 | 1,402,939,660 | I_kwDODunzps5TnykM | 5,093 | Mismatch between tutoriel and doc | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] | closed | false | null | 3 | 2022-10-10T10:23:53Z | 2022-10-10T17:51:15Z | 2022-10-10T17:51:14Z | null | ## Describe the bug
In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor to work.
## Steps to reproduce the bug
MWE:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt")
```
## Expected results
return_tensors to be a valid kwarg :smiley:
## Actual results
```python
>> TypeError: map() got an unexpected keyword argument 'return_tensors'
```
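For reference, the call does work once `return_tensors` is passed to the tokenizer itself rather than to `map()` (a sketch of the fix suggested in the comments; `padding=True` is added here since batched rows of different lengths cannot form one rectangular tensor):
```python
# pass return_tensors to the tokenizer call, not to map()
dataset = dataset.map(
    lambda examples: tokenizer(examples["review"], padding=True, return_tensors="np"),
    batched=True,
)
```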
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5093/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5093/timeline | null | completed | null | null | false | [
"Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).",
"Can I work on this?",
"Fixed in https://github.com/huggingface/datasets/pull/5095"
] |
https://api.github.com/repos/huggingface/datasets/issues/3037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3037/comments | https://api.github.com/repos/huggingface/datasets/issues/3037/events | https://github.com/huggingface/datasets/pull/3037 | 1,018,091,919 | PR_kwDODunzps4syi15 | 3,037 | SberQuad | [] | closed | false | null | 0 | 2021-10-06T11:21:08Z | 2021-10-06T11:33:08Z | 2021-10-06T11:33:08Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3037/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3037/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3037.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3037",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3037.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3037"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6074/comments | https://api.github.com/repos/huggingface/datasets/issues/6074/events | https://github.com/huggingface/datasets/pull/6074 | 1,822,299,128 | PR_kwDODunzps5Wb8O_ | 6,074 | Misc doc improvements | [] | closed | false | null | 3 | 2023-07-26T12:20:54Z | 2023-07-27T16:16:28Z | 2023-07-27T16:16:02Z | null | Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has been out for a while). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6074/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6074",
"merged_at": "2023-07-27T16:16:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6074"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006616 / 0.011353 (-0.004737) | 0.003915 / 0.011008 (-0.007093) | 0.083271 / 0.038508 (0.044763) | 0.072595 / 0.023109 (0.049485) | 0.307224 / 0.275898 (0.031326) | 0.337244 / 0.323480 (0.013764) | 0.005296 / 0.007986 (-0.002690) | 0.003325 / 0.004328 (-0.001003) | 0.064589 / 0.004250 (0.060339) | 0.056369 / 0.037052 (0.019316) | 0.310829 / 0.258489 (0.052340) | 0.345563 / 0.293841 (0.051722) | 0.030551 / 0.128546 (-0.097995) | 0.008519 / 0.075646 (-0.067127) | 0.286368 / 0.419271 (-0.132903) | 0.052498 / 0.043533 (0.008966) | 0.308735 / 0.255139 (0.053596) | 0.329234 / 0.283200 (0.046034) | 0.022588 / 0.141683 (-0.119095) | 1.453135 / 1.452155 (0.000981) | 1.525956 / 1.492716 (0.033239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199417 / 0.018006 (0.181410) | 0.454621 / 0.000490 (0.454131) | 0.004928 / 0.000200 (0.004728) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028436 / 0.037411 (-0.008975) | 0.083722 / 0.014526 (0.069196) | 0.095162 / 0.176557 (-0.081395) | 0.153434 / 0.737135 (-0.583702) | 0.099480 / 0.296338 (-0.196859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384647 / 0.215209 (0.169438) | 3.838406 / 2.077655 (1.760751) | 1.891267 / 1.504120 (0.387148) | 1.751432 / 1.541195 (0.210238) | 1.737443 / 1.468490 
(0.268953) | 0.487758 / 4.584777 (-4.097019) | 3.635925 / 3.745712 (-0.109787) | 5.208718 / 5.269862 (-0.061144) | 3.029374 / 4.565676 (-1.536302) | 0.057613 / 0.424275 (-0.366662) | 0.007177 / 0.007607 (-0.000430) | 0.455596 / 0.226044 (0.229552) | 4.559969 / 2.268929 (2.291040) | 2.325321 / 55.444624 (-53.119303) | 2.034924 / 6.876477 (-4.841552) | 2.163869 / 2.142072 (0.021796) | 0.583477 / 4.805227 (-4.221750) | 0.132870 / 6.500664 (-6.367795) | 0.059618 / 0.075469 (-0.015851) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263751 / 1.841788 (-0.578037) | 19.740004 / 8.074308 (11.665696) | 14.410980 / 10.191392 (4.219588) | 0.170367 / 0.680424 (-0.510057) | 0.018225 / 0.534201 (-0.515976) | 0.390101 / 0.579283 (-0.189182) | 0.404298 / 0.434364 (-0.030066) | 0.455295 / 0.540337 (-0.085043) | 0.621179 / 1.386936 (-0.765757) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006580 / 0.011353 (-0.004773) | 0.004078 / 0.011008 (-0.006930) | 0.065842 / 0.038508 (0.027334) | 0.074494 / 0.023109 (0.051385) | 0.403644 / 0.275898 (0.127746) | 0.430204 / 0.323480 (0.106724) | 0.005343 / 0.007986 (-0.002643) | 0.003366 / 0.004328 (-0.000963) | 0.064858 / 0.004250 (0.060607) | 0.056252 / 0.037052 (0.019200) | 0.412556 / 0.258489 (0.154067) | 0.434099 / 0.293841 (0.140258) | 0.031518 / 0.128546 (-0.097028) | 0.008543 / 0.075646 (-0.067104) | 0.071658 / 0.419271 (-0.347613) | 0.049962 / 0.043533 (0.006430) | 0.398511 / 0.255139 (0.143372) | 0.415908 / 0.283200 (0.132708) | 0.025011 / 0.141683 (-0.116672) | 1.492350 / 1.452155 (0.040195) | 1.552996 / 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204971 / 0.018006 (0.186964) | 0.439965 / 0.000490 (0.439475) | 0.002071 / 0.000200 (0.001872) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031673 / 0.037411 (-0.005738) | 0.087529 / 0.014526 (0.073004) | 0.099882 / 0.176557 (-0.076675) | 0.156994 / 0.737135 (-0.580141) | 0.101421 / 0.296338 (-0.194918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407480 / 0.215209 (0.192271) | 4.069123 / 2.077655 (1.991468) | 2.081288 / 1.504120 (0.577169) | 1.920367 / 1.541195 (0.379172) | 1.981053 / 1.468490 (0.512563) | 0.481995 / 4.584777 (-4.102782) | 3.546486 / 3.745712 (-0.199226) | 5.133150 / 5.269862 (-0.136712) | 3.056444 / 4.565676 (-1.509232) | 0.056650 / 0.424275 (-0.367625) | 0.007746 / 0.007607 (0.000139) | 0.490891 / 0.226044 (0.264847) | 4.902160 / 2.268929 (2.633232) | 2.564726 / 55.444624 (-52.879899) | 2.234988 / 6.876477 (-4.641489) | 2.387656 / 2.142072 (0.245583) | 0.576315 / 4.805227 (-4.228912) | 0.132065 / 6.500664 (-6.368599) | 0.060728 / 0.075469 (-0.014741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.370568 / 1.841788 (-0.471220) | 19.883159 / 8.074308 (11.808851) | 14.442066 / 10.191392 (4.250674) | 0.150119 / 0.680424 (-0.530305) | 0.018359 / 0.534201 (-0.515842) | 0.394128 / 0.579283 (-0.185155) | 0.411697 / 0.434364 (-0.022667) | 0.460580 / 0.540337 (-0.079757) | 0.608490 / 1.386936 (-0.778446) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"merging now if you don't mind - this way I can make a patch release"
] |
https://api.github.com/repos/huggingface/datasets/issues/183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/183/comments | https://api.github.com/repos/huggingface/datasets/issues/183/events | https://github.com/huggingface/datasets/issues/183 | 623,054,270 | MDU6SXNzdWU2MjMwNTQyNzA= | 183 | [Bug] labels of glue/ax are all -1 | [] | closed | false | null | 2 | 2020-05-22T08:43:36Z | 2020-05-22T22:14:05Z | 2020-05-22T22:14:05Z | null | ```
ax = nlp.load_dataset('glue', 'ax')
for i in range(30): print(ax['test'][i]['label'], end=', ')
```
```
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/183/timeline | null | completed | null | null | false | [
"This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.",
"Ah, yeah. Why it didnโt occur to me. ๐\nThank you for your comment."
] |
https://api.github.com/repos/huggingface/datasets/issues/4348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4348/comments | https://api.github.com/repos/huggingface/datasets/issues/4348/events | https://github.com/huggingface/datasets/issues/4348 | 1,235,432,976 | I_kwDODunzps5JozYQ | 4,348 | `inspect` functions can't fetch dataset script from the Hub | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-05-13T16:08:26Z | 2022-06-09T10:26:06Z | 2022-06-09T10:26:06Z | null | The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:
```py
>>> from datasets import inspect_dataset
>>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4348/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4348/timeline | null | completed | null | null | false | [
"Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://github.com/huggingface/datasets/blob/cfae0545b2ba05452e16136cacc7d370b4b186a1/src/datasets/inspect.py#L89-L91\r\n\r\ncc @lhoestq: `force_local_path` is only used in `inspect_dataset` and `inspect_metric`. Is it OK if we revert the behavior to match the old one?",
"Good catch ! Yea I think it's fine :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2651/comments | https://api.github.com/repos/huggingface/datasets/issues/2651/events | https://github.com/huggingface/datasets/issues/2651 | 944,796,961 | MDU6SXNzdWU5NDQ3OTY5NjE= | 2,651 | Setting log level higher than warning does not suppress progress bar | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2021-07-14T21:06:51Z | 2022-07-08T14:51:57Z | 2021-07-15T03:41:35Z | null | ## Describe the bug
I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well).
According to #1627, one can suppress them by setting the log level higher than `warning`; however, doing so doesn't suppress them in version 1.9.0.
I also tried setting the `DATASETS_VERBOSITY` environment variable to `error` or `critical`, but that didn't work either.
## Steps to reproduce the bug
```python
import datasets
from datasets.utils.logging import set_verbosity_error
set_verbosity_error()
def dummy_map(batch):
return batch
common_voice_train = datasets.load_dataset("common_voice", "de", split="train")
common_voice_test = datasets.load_dataset("common_voice", "de", split="test")
common_voice_train.map(dummy_map)
```
## Expected results
- The progress bar for `.map` call won't be shown
## Actual results
- The progress bar for `.map` is still shown
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyArrow version: 4.0.1
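As an aside, per the discussion in the comments below, newer releases expose a dedicated helper that avoids the logging workaround entirely:
```python
from datasets.utils.logging import disable_progress_bar

disable_progress_bar()
```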
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2651/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nyou can suppress progress bars by patching logging as follows:\r\n```python\r\nimport datasets\r\nimport logging\r\ndatasets.logging.get_verbosity = lambda: logging.NOTSET\r\n# map call ...\r\n```\r\nEDIT: now you have to use `disable_progress_bar `",
"Thank you, it worked :)",
"See https://github.com/huggingface/datasets/issues/2528 for reference",
"Note also that you can disable the progress bar with\r\n\r\n```python\r\nfrom datasets.utils import disable_progress_bar\r\ndisable_progress_bar()\r\n```\r\n\r\nSee https://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/src/datasets/utils/tqdm_utils.py#L84",
"Now the library officially recommends `set_progress_bar_enabled(False)`\r\n\r\n```py\r\nfrom datasets.utils import set_progress_bar_enabled\r\n\r\nset_progress_bar_enabled(False)\r\n```\r\n\r\nsource:\r\n\r\nhttps://github.com/huggingface/datasets/blob/1fd47120ace13626c528367787ffa13e1a26e6c0/src/datasets/utils/tqdm_utils.py#L83-L88\r\n\r\n",
"From https://github.com/huggingface/datasets/pull/3897, `disable_progress_bar` is the function you should use",
"Now ``disable_progress_bar`` function is in ``datasets/src/datasets/utils/logging.py``.\r\nhttps://github.com/huggingface/datasets/blob/aa555a299ad73c65e3f997a764e9d211675ab05d/src/datasets/utils/logging.py#L233-L236\r\nAnd the method mentioned in https://github.com/huggingface/datasets/issues/2651#issuecomment-880270774 is not working now."
] |
https://api.github.com/repos/huggingface/datasets/issues/5884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5884/comments | https://api.github.com/repos/huggingface/datasets/issues/5884/events | https://github.com/huggingface/datasets/issues/5884 | 1,719,548,172 | I_kwDODunzps5mfjkM | 5,884 | `Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_` | [] | closed | false | null | 2 | 2023-05-22T12:03:06Z | 2023-06-09T16:04:56Z | 2023-06-09T16:04:55Z | null | ### Describe the bug
When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception, e.g. for the `é` character: `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`.
### Steps to reproduce the bug
Running the following script will eventually fail when reaching the batch that contains non-ASCII-compatible strings.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
>>> UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)
```
### Expected behavior
The following script should run properly, keeping the strings as `numpy.unicode_` or `numpy.str_` instead of `numpy.bytes_`, since some characters are not ASCII-compatible and casting them to bytes leads to an issue when applying the `map`.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
```
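The root cause can be reproduced with NumPy alone: converting a `str` to `np.bytes_` goes through the ASCII codec, so any non-ASCII character fails:
```python
import numpy as np

np.array("é", dtype=np.str_)    # fine: NumPy's unicode dtype
np.array("é", dtype=np.bytes_)  # raises UnicodeEncodeError: 'ascii' codec can't encode character '\xe9'
```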
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5884/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5884/timeline | null | completed | null | null | false | [
"May eventually be solved in #5883 ",
"#self-assign"
] |
https://api.github.com/repos/huggingface/datasets/issues/5631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5631/comments | https://api.github.com/repos/huggingface/datasets/issues/5631/events | https://github.com/huggingface/datasets/issues/5631 | 1,620,442,854 | I_kwDODunzps5glf7m | 5,631 | Custom split names | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2023-03-12T17:21:43Z | 2023-03-24T14:13:00Z | 2023-03-24T14:13:00Z | null | ### Feature request
Hi,
I have participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the Hub. (Currently I can have more splits when loading datasets from URLs, but not from the Hub.)
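For instance, this already works when loading from local or remote files, where `data_files` accepts arbitrary split names:
```python
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={
        "train": "train.json",
        "validation_hard": "validation_hard.json",
        "test_ood": "test_ood.json",
    },
)
```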
### Motivation
Easier access to more splits
### Your contribution
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5631/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5631/timeline | null | completed | null | null | false | [
"Hi!\r\n\r\nYou can also use names other than \"train\", \"validation\" and \"test\". As an example, check the [script](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/blob/e095840f23f3dffc1056c078c2f9320dad9ca74d/common_voice_11_0.py#L139) of the Common Voice 11 dataset. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2999/comments | https://api.github.com/repos/huggingface/datasets/issues/2999/events | https://github.com/huggingface/datasets/pull/2999 | 1,013,536,933 | PR_kwDODunzps4skgCm | 2,999 | Set trivia_qa writer batch size | [] | closed | false | null | 0 | 2021-10-01T16:23:26Z | 2021-10-01T16:34:55Z | 2021-10-01T16:34:55Z | null | Save some RAM when generating trivia_qa | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2999/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2999.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2999",
"merged_at": "2021-10-01T16:34:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2999.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2999"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3399/comments | https://api.github.com/repos/huggingface/datasets/issues/3399/events | https://github.com/huggingface/datasets/issues/3399 | 1,073,593,861 | I_kwDODunzps4__b4F | 3,399 | Add Wikisource dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 1 | 2021-12-07T17:21:31Z | 2021-12-10T17:26:26Z | null | null | ## Adding a Dataset
- **Name:** *wikisource*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** Additional high quality textual data, besides Wikipedia.
Add loading script as "canonical" dataset (as is the case for "wikipedia").
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CC: @geohci, @yjernite | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3399/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3399/timeline | null | null | null | null | false | [
"See notebook by @geohci: https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikisource%20Processing.ipynb"
] |
https://api.github.com/repos/huggingface/datasets/issues/4632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4632/comments | https://api.github.com/repos/huggingface/datasets/issues/4632/events | https://github.com/huggingface/datasets/issues/4632 | 1,294,166,880 | I_kwDODunzps5NI2tg | 4,632 | 'sort' method sorts one column only | [] | closed | false | null | 3 | 2022-07-05T11:25:26Z | 2023-07-25T15:04:27Z | 2023-07-25T15:04:27Z | null | The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample's fields. I would expect it to change the order of the samples as a whole, based on the 'column' order. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4632/timeline | null | completed | null | null | false | [
"Hi ! `ds.sort()` does sort the full dataset, not just one column:\r\n```python\r\nfrom datasets import *\r\n\r\nds = Dataset.from_dict({\"foo\": [3, 2, 1], \"bar\": [\"c\", \"b\", \"a\"]})\r\nprint(d.sort(\"foo\").to_pandas()\r\n# foo bar\r\n# 0 1 a\r\n# 1 2 b\r\n# 2 3 c\r\n```\r\n\r\nWhat made you think it was not the case ? Did you experience a situation where it was only sorting one column ?",
"Hi! thank you for your quick reply!\r\nI wanted to sort the `cnn_dailymail` dataset by the length of the labels (num of characters). I added a new column to the dataset (`ds.add_column`) with the lengths and then sorted by this new column. Only the new length column was sorted, the reset left in their original order. ",
"That's unexpected, can you share the code you used to get this ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/302/comments | https://api.github.com/repos/huggingface/datasets/issues/302/events | https://github.com/huggingface/datasets/issues/302 | 643,910,418 | MDU6SXNzdWU2NDM5MTA0MTg= | 302 | Question - Sign Language Datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | 3 | 2020-06-23T14:53:40Z | 2020-11-25T11:25:33Z | 2020-11-25T11:25:33Z | null | An emerging field in NLP is SLP - sign language processing.
I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable.
The metrics for sign language to text translation are the same.
So, what do you think about (me, or others) adding datasets here?
An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/)
For every item in the dataset, the data object includes:
1. video_path - path to mp4 file
2. pose_path - a path to `.pose` file with human pose landmarks
3. openpose_path - a path to a `.json` file with human pose landmarks
4. gloss - string
5. text - string
6. video_metadata - height, width, frames, framerate
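For illustration, such an item could map onto `nlp` features roughly like this (a hypothetical sketch, not an agreed schema):
```python
import nlp

features = nlp.Features({
    "video_path": nlp.Value("string"),
    "pose_path": nlp.Value("string"),
    "openpose_path": nlp.Value("string"),
    "gloss": nlp.Value("string"),
    "text": nlp.Value("string"),
    "video_metadata": {
        "height": nlp.Value("int32"),
        "width": nlp.Value("int32"),
        "frames": nlp.Value("int32"),
        "framerate": nlp.Value("float32"),
    },
})
```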
------
To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? For example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse such files itself if libraries exist to do so. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/302/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/302/timeline | null | completed | null | null | false | [
"Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans / underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"plans\" addon.\r\n\r\nSame for sign language - if there is a dataset of videos, one addon can be to run OpenPose, another to run ARKit4 pose estimation, and another to run PoseNet, or even just a video embedding addon. (which are expensive to run individually for everyone who wants to use these data)\r\n\r\nThis is something I dabbled with my own implementation to a [research datasets library](https://github.com/AmitMY/meta-scholar/) and I love to get the discussion going on these topics.",
"This is a really cool idea !\r\nThe example for data objects you gave for the RWTH-PHOENIX-Weather 2014 T dataset can totally fit inside the library.\r\n\r\nFor your point about formats like `ilex`, `eaf`, or `srt`, it is possible to use any library in your dataset script.\r\nHowever most user probably won't need these libraries, as most datasets don't need them, and therefore it's unlikely that we will have them in the minimum requirements to use `nlp` (we want to keep it as light-weight as possible). If a user wants to load your dataset and doesn't have the libraries you need, an error is raised asking the user to install them.\r\n\r\nMore generally, we plan to have something like a `requirements.txt` per dataset. This could also be a place for addons as you said. What do you think ?",
"Thanks, Quentin, I think a `requirements.txt` per dataset will be a good thing.\r\nI will work on adding this dataset next week, and once we sort all of the kinks, I'll add more."
] |
https://api.github.com/repos/huggingface/datasets/issues/1847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1847/comments | https://api.github.com/repos/huggingface/datasets/issues/1847/events | https://github.com/huggingface/datasets/pull/1847 | 803,824,694 | MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0 | 1,847 | [Metrics] Add word error rate metric | [] | closed | false | null | 1 | 2021-02-08T18:41:15Z | 2021-02-09T17:53:21Z | 2021-02-09T17:53:21Z | null | This PR adds the word error rate metric to datasets.
WER: https://en.wikipedia.org/wiki/Word_error_rate
WER is the main metric used in ASR (automatic speech recognition).
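A quick usage sketch once this is merged:
```python
from datasets import load_metric

wer_metric = load_metric("wer")
# WER = (substitutions + deletions + insertions) / number of words in the reference
score = wer_metric.compute(predictions=["the cat sat"], references=["the cat sat on the mat"])
print(score)
```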
`jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1847/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1847.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1847",
"merged_at": "2021-02-09T17:53:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1847.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1847"
} | true | [
"Feel free to merge once the CI is all green ;)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4545/comments | https://api.github.com/repos/huggingface/datasets/issues/4545/events | https://github.com/huggingface/datasets/pull/4545 | 1,280,899,028 | PR_kwDODunzps46KV-y | 4,545 | Make DuplicateKeysError more user friendly [For Issue #2556] | [] | closed | false | null | 2 | 2022-06-22T21:01:34Z | 2022-06-28T09:37:06Z | 2022-06-28T09:26:04Z | null | # What does this PR do?
## Summary
*`DuplicateKeysError` does not provide any information regarding the examples which have the same key.*
*This information is very helpful for debugging the dataset generator script.*
## Additions
-
## Changes
- Changed `DuplicateKeysError Class` in `src/datasets/keyhash.py` to add current index and duplicate_key_indices to error message.
- Changed `check_duplicate_keys` function in `src/datasets/arrow_writer.py` to find indices of examples with duplicate hash if duplicate keys are found (see the sketch below).
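A rough sketch of the shape of the friendlier message (hypothetical names and wording; the exact text lives in the PR diff):
```python
# purely illustrative values for the duplicated key and the offending indices
key = "some-example-id"
duplicate_key_indices = [10, 43]
message = (
    f"Found multiple examples generated with the same key {key!r}.\n"
    f"The examples at indices {duplicate_key_indices} share this key."
)
```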
## Deletions
-
## To do :
- [x] Find way to find and print path `<Path to Dataset>` in Error message
## Issues Addressed :
Fixes #2556 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4545/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4545.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4545",
"merged_at": "2022-06-28T09:26:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4545.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4545"
} | true | [
"> Nice thanks !\r\n> \r\n> After your changes feel free to mark this PR as \"ready for review\" ;)\r\n\r\nMarking PR ready for review.\r\n\r\n@lhoestq Let me know if there is anything else required or if we are good to go ahead and merge.",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/837/comments | https://api.github.com/repos/huggingface/datasets/issues/837/events | https://github.com/huggingface/datasets/pull/837 | 740,250,215 | MDExOlB1bGxSZXF1ZXN0NTE4NzcwNDM5 | 837 | AlloCiné dataset card | [] | closed | false | null | 0 | 2020-11-10T21:19:53Z | 2020-11-25T21:56:27Z | 2020-11-25T21:56:27Z | null | Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md
There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creator used come from?
I'm also wondering how best to go about talking about limitations when so little is known about the data. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/837/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/837/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/837.diff",
"html_url": "https://github.com/huggingface/datasets/pull/837",
"merged_at": "2020-11-25T21:56:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/837.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/837"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1249/comments | https://api.github.com/repos/huggingface/datasets/issues/1249/events | https://github.com/huggingface/datasets/pull/1249 | 758,472,863 | MDExOlB1bGxSZXF1ZXN0NTMzNjQwNjA1 | 1,249 | Add doc2dial dataset | [] | closed | false | null | 2 | 2020-12-07T12:39:09Z | 2020-12-14T16:17:14Z | 2020-12-14T16:17:14Z | null | ### Doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset v0.9
Once complete, this will add the [Doc2dial](https://doc2dial.github.io/data.html) dataset from the generic datasets list. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1249/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1249.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1249",
"merged_at": "2020-12-14T16:17:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1249.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1249"
} | true | [
"It not always practical to use nested `Sequence`. If you have troubles with sequence you can use lists instead. \r\n\r\nFor example\r\n```python\r\n\r\nfeatures=datasets.Features(\r\n {\r\n \"dial_id\": datasets.Value(\"string\"),\r\n \"doc_id\": datasets.Value(\"string\"),\r\n \"domain\": datasets.Value(\"string\"),\r\n \"turns\": [\r\n {\r\n \"turn_id\": datasets.Value(\"int32\"),\r\n \"role\": datasets.Value(\"string\"),\r\n \"da\": datasets.Value(\"string\"),\r\n \"reference\": [\r\n {\r\n \"keys\" : datasets.Value(\"string\"),\r\n \"values\": datasets.Value(\"string\"), \r\n }\r\n\r\n ],\r\n \"utterance\": datasets.Value(\"string\"),\r\n }\r\n ],\r\n }\r\n),\r\n```\r\n\r\nthis way `turns` will be a list of dict, and the \"reference\" key of `turns` will be a list of dict as well",
"No problem thanks for all your help getting this to the final stages! Added .gitignore, removed .lock and applied the changes you asked for."
] |
https://api.github.com/repos/huggingface/datasets/issues/5576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5576/comments | https://api.github.com/repos/huggingface/datasets/issues/5576/events | https://github.com/huggingface/datasets/issues/5576 | 1,598,582,744 | I_kwDODunzps5fSG_Y | 5,576 | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. | [] | closed | false | null | 1 | 2023-02-24T12:57:49Z | 2023-02-24T12:58:31Z | 2023-02-24T12:58:18Z | null | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
I worked around this by downloading the `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes).
_Originally posted by @tc-wolf in https://github.com/huggingface/datasets/issues/3053#issuecomment-1281392422_
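The failure is easy to reproduce with PyArrow alone:
```python
import pyarrow as pa

pa.array([528], type=pa.int32())  # fine: the value fits in a wider integer type
pa.array([528], type=pa.int8())   # ArrowInvalid: Integer value 528 not in range: -128 to 127
```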
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5576/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5576/timeline | null | not_planned | null | null | false | [
"Duplicated issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/3076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3076/comments | https://api.github.com/repos/huggingface/datasets/issues/3076/events | https://github.com/huggingface/datasets/issues/3076 | 1,026,113,484 | I_kwDODunzps49KT_M | 3,076 | Error when loading a metric | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-10-14T08:29:27Z | 2021-10-14T09:14:55Z | 2021-10-14T09:14:55Z | null | ## Describe the bug
As reported by @sgugger, after the last release, an exception is thrown when loading a metric.
## Steps to reproduce the bug
```python
from datasets import load_metric
metric = load_metric("squad_v2")
```
## Actual results
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-1-e612a8cab787> in <module>
1 from datasets import load_metric
----> 2 metric = load_metric("squad_v2")
d:\projects\huggingface\datasets\src\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, script_version, **metric_init_kwargs)
1336 )
1337 revision = script_version
-> 1338 metric_module = metric_module_factory(
1339 path, revision=revision, download_config=download_config, download_mode=download_mode
1340 ).module_path
d:\projects\huggingface\datasets\src\datasets\load.py in metric_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, **download_kwargs)
1237 if not isinstance(e1, FileNotFoundError):
1238 raise e1 from None
-> 1239 raise FileNotFoundError(
1240 f"Couldn't find a metric script at {relative_to_absolute_path(combined_path)}. "
1241 f"Metric '{path}' doesn't exist on the Hugging Face Hub either."
FileNotFoundError: Couldn't find a metric script at D:\projects\huggingface\datasets\squad_v2\squad_v2.py. Metric 'squad_v2' doesn't exist on the Hugging Face Hub either.
```
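A possible workaround until the lookup is fixed (untested here, and it assumes the working directory is a checkout of this repository) is to pass the local metric script path directly:
```python
from datasets import load_metric

# bypass the Hub lookup by pointing at the metric script in the repo checkout
metric = load_metric("./metrics/squad_v2")
```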
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3076/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/385/comments | https://api.github.com/repos/huggingface/datasets/issues/385/events | https://github.com/huggingface/datasets/pull/385 | 655,663,997 | MDExOlB1bGxSZXF1ZXN0NDQ4MTAzMjY5 | 385 | Remove unnecessary nested dict | [] | closed | false | null | 5 | 2020-07-13T08:46:23Z | 2020-07-15T11:27:38Z | 2020-07-15T10:03:53Z | null | This PR removes unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
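As a sketch of the pattern being flattened (feature names here are hypothetical, and this uses the current package name rather than `nlp`):
```python
from datasets import Features, Sequence, Value

# before: a Sequence wrapping a single-key dict adds a needless nesting level
nested = Features({"answers": Sequence({"text": Value("string")})})

# after: unwrapping the inner dict leaves a plain list of strings per example
flat = Features({"answers": Sequence(Value("string"))})
```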
#378 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/385/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/385/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/385.diff",
"html_url": "https://github.com/huggingface/datasets/pull/385",
"merged_at": "2020-07-15T10:03:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/385.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/385"
} | true | [
"We can probably scan the dataset scripts with a regexpr to try to identify this pattern cc @patrickvonplaten maybe",
"@mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\n\r\nfrom nlp import prepare_module, DownloadConfig, import_main_class, hf_api\r\nimport tempfile\r\n\r\n\r\ndef scan_for_nested_unnecessary_dict(dataset_name):\r\n\r\n def load_builder_class(dataset_name):\r\n module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))\r\n return import_main_class(module_path)\r\n\r\n def load_configs(dataset_name):\r\n builder_cls = load_builder_class(dataset_name)\r\n if len(builder_cls.BUILDER_CONFIGS) == 0:\r\n return [None]\r\n return builder_cls.BUILDER_CONFIGS\r\n\r\n def scan_features_for_nested_dict(features):\r\n is_sequence = False\r\n if hasattr(features, \"_type\"):\r\n if features._type != 'Sequence':\r\n return False\r\n else:\r\n is_sequence = True\r\n features = features.feature\r\n\r\n if isinstance(features, list):\r\n for value in features:\r\n if scan_features_for_nested_dict(value):\r\n return True\r\n return False\r\n\r\n elif isinstance(features, dict):\r\n for key, value in features.items():\r\n if is_sequence and len(features.keys()) == 1 and hasattr(features[key], \"_type\") and features[key]._type != \"Sequence\":\r\n return True\r\n if scan_features_for_nested_dict(value):\r\n return True\r\n return False\r\n elif hasattr(features, \"_type\"):\r\n return False\r\n else:\r\n raise ValueError(f\"{features} should be either a list, a dict or a feature\")\r\n\r\n configs = load_configs(dataset_name)\r\n\r\n for config in configs:\r\n with tempfile.TemporaryDirectory() as processed_temp_dir:\r\n # create config and dataset\r\n dataset_builder_cls = load_builder_class(dataset_name)\r\n name = config.name if config is not None else None\r\n dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)\r\n\r\n is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)\r\n if is_nested_dict_in_dataset:\r\n print(f\"{dataset_name} with {name} needs refactoring\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n scan_for_nested_unnecessary_dict(\"race\") # prints True\r\n scan_for_nested_unnecessary_dict(\"mlqa\") # prints True\r\n scan_for_nested_unnecessary_dict(\"squad\") # prints Nothing\r\n\r\n # ran the following lines for 1min and seems to work -> didn't check for all datasets though\r\n# api = hf_api.HfApi()\r\n# all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]\r\n# for dataset in all_datasets:\r\n# scan_for_nested_unnecessary_dict(dataset)\r\n```",
"> @mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n> \r\n> ```python\r\n> #!/usr/bin/env python3\r\n> \r\n> from nlp import prepare_module, DownloadConfig, import_main_class, hf_api\r\n> import tempfile\r\n> \r\n> \r\n> def scan_for_nested_unnecessary_dict(dataset_name):\r\n> \r\n> def load_builder_class(dataset_name):\r\n> module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))\r\n> return import_main_class(module_path)\r\n> \r\n> def load_configs(dataset_name):\r\n> builder_cls = load_builder_class(dataset_name)\r\n> if len(builder_cls.BUILDER_CONFIGS) == 0:\r\n> return [None]\r\n> return builder_cls.BUILDER_CONFIGS\r\n> \r\n> def scan_features_for_nested_dict(features):\r\n> is_sequence = False\r\n> if hasattr(features, \"_type\"):\r\n> if features._type != 'Sequence':\r\n> return False\r\n> else:\r\n> is_sequence = True\r\n> features = features.feature\r\n> \r\n> if isinstance(features, list):\r\n> for value in features:\r\n> if scan_features_for_nested_dict(value):\r\n> return True\r\n> return False\r\n> \r\n> elif isinstance(features, dict):\r\n> for key, value in features.items():\r\n> if is_sequence and len(features.keys()) == 1 and hasattr(features[key], \"_type\") and features[key]._type != \"Sequence\":\r\n> return True\r\n> if scan_features_for_nested_dict(value):\r\n> return True\r\n> return False\r\n> else:\r\n> raise ValueError(f\"{features} should be either a list of a dict\")\r\n> \r\n> configs = load_configs(dataset_name)\r\n> \r\n> for config in configs:\r\n> with tempfile.TemporaryDirectory() as processed_temp_dir:\r\n> # create config and dataset\r\n> dataset_builder_cls = load_builder_class(dataset_name)\r\n> name = config.name if config is not None else None\r\n> dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)\r\n> \r\n> is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)\r\n> if is_nested_dict_in_dataset:\r\n> print(f\"{dataset_name} with {name} needs refactoring\")\r\n> \r\n> \r\n> if __name__ == \"__main__\":\r\n> scan_for_nested_unnecessary_dict(\"race\") # prints True\r\n> scan_for_nested_unnecessary_dict(\"mlqa\") # prints True\r\n> scan_for_nested_unnecessary_dict(\"squad\") # prints Nothing\r\n> \r\n> # ran the following lines for 1min and seems to work -> didn't check for all datasets though\r\n> # api = hf_api.HfApi()\r\n> # all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]\r\n> # for dataset in all_datasets:\r\n> # scan_for_nested_unnecessary_dict(dataset)\r\n> ```\r\n\r\nGreat, I will try it",
"I'm not sure the work on this PR was finished @lhoestq cc @mariamabarham @patrickvonplaten ",
"Sorry for that, apparently there are other datasets that could have unnecessary nested dicts.\r\nWe can have another PR to scan and fix the other datasets.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2769/comments | https://api.github.com/repos/huggingface/datasets/issues/2769/events | https://github.com/huggingface/datasets/pull/2769 | 963,240,802 | MDExOlB1bGxSZXF1ZXN0NzA1ODk5MTYy | 2,769 | Allow PyArrow from source | [] | closed | false | null | 0 | 2021-08-07T14:26:44Z | 2021-08-09T15:38:39Z | 2021-08-09T15:38:39Z | null | When installing pyarrow from source the version is:
```python
>>> import pyarrow; pyarrow.__version__
'2.1.0.dev612'
```
However, this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2769/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2769/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2769.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2769",
"merged_at": "2021-08-09T15:38:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2769.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2769"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2273/comments | https://api.github.com/repos/huggingface/datasets/issues/2273/events | https://github.com/huggingface/datasets/pull/2273 | 869,046,290 | MDExOlB1bGxSZXF1ZXN0NjI0NDcxODc1 | 2,273 | Added CUAD metrics | [] | closed | false | null | 0 | 2021-04-27T16:49:12Z | 2021-04-29T13:59:47Z | 2021-04-29T13:59:47Z | null | `EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2273/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2273/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2273.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2273",
"merged_at": "2021-04-29T13:59:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2273.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2273"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/948/comments | https://api.github.com/repos/huggingface/datasets/issues/948/events | https://github.com/huggingface/datasets/pull/948 | 754,306,260 | MDExOlB1bGxSZXF1ZXN0NTMwMjI4NjQz | 948 | docs(ADD_NEW_DATASET): correct indentation for script | [] | closed | false | null | 0 | 2020-12-01T11:17:38Z | 2020-12-01T11:25:18Z | 2020-12-01T11:25:18Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/948/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/948.diff",
"html_url": "https://github.com/huggingface/datasets/pull/948",
"merged_at": "2020-12-01T11:25:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/948.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/948"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/575/comments | https://api.github.com/repos/huggingface/datasets/issues/575/events | https://github.com/huggingface/datasets/issues/575 | 693,691,611 | MDU6SXNzdWU2OTM2OTE2MTE= | 575 | Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. | [] | closed | false | null | 6 | 2020-09-04T21:46:25Z | 2020-09-22T10:41:36Z | 2020-09-22T10:41:36Z | null | Hi,
I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset:
```
>>> from nlp import load_dataset
>>> dataset = load_dataset('glue', 'mrpc', split='train')
```
However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the last few lines):
```
/net/vaosl01/opt/NFS/su0/miniconda3/envs/hf/lib/python3.7/site-packages/nlp/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)
354 " to False."
355 )
--> 356 raise ConnectionError("Couldn't reach {}".format(url))
357
358 # From now on, connected is True.
ConnectionError: Couldn't reach https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc
```
I tried glue with cola and sst2. I got the same error, just instead of mrpc in the URL, it was replaced with cola and sst2.
Since this was not working, I thought I'd try another dataset. So I tried downloading the imdb dataset:
```
ds = load_dataset('imdb', split='train')
```
This downloads the data, but it just blocks after that:
```
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4.56k/4.56k [00:00<00:00, 1.38MB/s]
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2.07k/2.07k [00:00<00:00, 1.15MB/s]
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown sizetotal: 207.28 MiB) to /net/vaosl01/opt/NFS/su0/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743...
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 84.1M/84.1M [00:07<00:00, 11.1MB/s]
```
I checked the folder `$HF_HOME/datasets/downloads/extracted/<id>/aclImdb`. This folder is constantly growing in size. When I navigated to the train folder within, there were no files. However, the test folder seemed to be populating. The last time I checked, it was 327M. I thought the IMDB dataset was smaller than that. My questions are:
1. Why is it still blocking? Is it still downloading?
2. I specified split as train, so why is the test folder being populated?
3. I read somewhere that after downloading, `nlp` converts the text files into some sort of `arrow` files, which will also take a while. Is this also happening here?
Thanks.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/575/timeline | null | completed | null | null | false | [
"Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though.",
"Thanks for the report, I'll give a look!",
"I am also seeing a similar error when running the following:\r\n\r\n```\r\nimport nlp\r\ndataset = load_dataset('cola')\r\n```\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/load.py\", line 509, in load_dataset\r\n module_path = prepare_module(path, download_config=download_config, dataset=True)\r\n File \"/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/load.py\", line 248, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/utils/file_utils.py\", line 191, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/utils/file_utils.py\", line 356, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cola/cola.py\r\n```",
"@jeswan `\"cola\"` is not a valid dataset identifier (you can check the up-to-date list on https://huggingface.co/datasets) but you can find cola inside glue.",
"Ah right. Thanks!",
"Hi. Closing this one since #626 updated the glue urls.\r\n\r\n> 1. Why is it still blocking? Is it still downloading?\r\n\r\nAfter downloading it generates the arrow file by iterating through the examples.\r\nThe number of examples processed by second is shown during the processing (not sure why it was not the case for you)\r\n\r\n> 2. I specified split as train, so why is the test folder being populated?\r\n\r\nIt downloads every split\r\n\r\n\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5492/comments | https://api.github.com/repos/huggingface/datasets/issues/5492/events | https://github.com/huggingface/datasets/issues/5492 | 1,566,604,216 | I_kwDODunzps5dYHu4 | 5,492 | Push_to_hub in a pull request | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | open | false | null | 2 | 2023-02-01T18:32:14Z | 2023-02-14T22:16:40Z | null | null | Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name.
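In the meantime, a rough manual route via `huggingface_hub` could look like the sketch below (untested; the repo id and paths are placeholders, and `ds` is assumed to be a `Dataset`):
```python
from huggingface_hub import HfApi

ds.to_parquet("train.parquet")  # materialize the data locally first
HfApi().upload_file(
    path_or_fileobj="train.parquet",
    path_in_repo="data/train.parquet",
    repo_id="username/my-dataset",  # placeholder
    repo_type="dataset",
    create_pr=True,  # open a pull request instead of committing to a branch
)
```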
cc @nateraw
It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5492/timeline | null | null | null | null | false | [
"Assigned to myself and will get to it in the next week, but if someone finds this issue annoying and wants to submit a PR before I do, just ping me here and I'll reassign :). ",
"I would like to be assigned to this issue, @nateraw . #self-assign"
] |
https://api.github.com/repos/huggingface/datasets/issues/3794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3794/comments | https://api.github.com/repos/huggingface/datasets/issues/3794/events | https://github.com/huggingface/datasets/pull/3794 | 1,153,185,343 | PR_kwDODunzps4zniT4 | 3,794 | Add Mahalanobis distance metric | [] | closed | false | null | 0 | 2022-02-27T10:56:31Z | 2022-03-02T14:46:15Z | 2022-03-02T14:46:15Z | null | Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P.
In this PR I implement the metric in a simple way with the help of numpy only.
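For reference, the quantity computed is sqrt((x - mu)^T Sigma^-1 (x - mu)); a numpy-only sketch (not necessarily the exact code in this PR) could be:
```python
import numpy as np

def mahalanobis(x, reference):
    # distance of each row of `x` from the distribution estimated on `reference`
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    delta = np.atleast_2d(x) - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", delta, cov_inv, delta))
```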
Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can make this metric accept texts as input and encode them with a featurizer model, if that is desirable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3794/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3794/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3794.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3794",
"merged_at": "2022-03-02T14:46:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3794.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3794"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/563/comments | https://api.github.com/repos/huggingface/datasets/issues/563/events | https://github.com/huggingface/datasets/pull/563 | 690,908,674 | MDExOlB1bGxSZXF1ZXN0NDc3NzI2MTEz | 563 | [Large datasets] Speed up download and processing | [] | closed | false | null | 2 | 2020-09-02T10:31:54Z | 2020-09-09T09:03:33Z | 2020-09-09T09:03:32Z | null | Various improvements to speed up creation and processing of large-scale datasets.
Currently:
- distributed downloads
- remove etag from datafiles hashes to spare a request when restarting a failed download | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/563/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/563.diff",
"html_url": "https://github.com/huggingface/datasets/pull/563",
"merged_at": "2020-09-09T09:03:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/563.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/563"
} | true | [
"Looks all good :)\r\nI rebased from master and added a test for parallel `map_nested`",
"you're da best"
] |
https://api.github.com/repos/huggingface/datasets/issues/5649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5649/comments | https://api.github.com/repos/huggingface/datasets/issues/5649/events | https://github.com/huggingface/datasets/issues/5649 | 1,630,173,460 | I_kwDODunzps5hKnkU | 5,649 | The index column created with .to_sql() is dependent on the batch_size when writing | [] | closed | false | null | 2 | 2023-03-18T05:25:17Z | 2023-06-17T07:01:57Z | 2023-06-17T07:01:57Z | null | ### Describe the bug
It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index.
This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export.
### Steps to reproduce the bug
```
from datasets import Dataset
import sqlite3
db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101,106)})
nice_numbers.to_sql("nice1", db, batch_size=1)
nice_numbers.to_sql("nice2", db, batch_size=2)
print(db.execute("select * from nice1").fetchall()) # [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)]
print(db.execute("select * from nice2").fetchall()) # [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)]
```
### Expected behavior
I expected the "index" column to be unique
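In the meantime, a workaround sketch (it assumes extra keyword arguments are still forwarded to `pandas.DataFrame.to_sql`, so `index=False` applies):
```python
# give each row a globally unique id, then skip the per-batch auto index
nice_numbers = nice_numbers.map(lambda example, idx: {"row_id": idx}, with_indices=True)
nice_numbers.to_sql("nice3", db, batch_size=2, index=False)
```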
### Environment info
```
% datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
zsh: segmentation fault datasets-cli env
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5649/timeline | null | not_planned | null | null | false | [
"Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. ",
"I think this is low enough priority for me to close this as Won't Fix. If I need any primary keys I can generate them beforehand. Feel free to reopen."
] |
https://api.github.com/repos/huggingface/datasets/issues/5830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5830/comments | https://api.github.com/repos/huggingface/datasets/issues/5830/events | https://github.com/huggingface/datasets/pull/5830 | 1,701,451,399 | PR_kwDODunzps5QEFEi | 5,830 | Debug windows #2 | [] | closed | false | null | 0 | 2023-05-09T06:40:34Z | 2023-05-09T06:40:47Z | 2023-05-09T06:40:47Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5830/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5830.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5830",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5830.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5830"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/356/comments | https://api.github.com/repos/huggingface/datasets/issues/356/events | https://github.com/huggingface/datasets/pull/356 | 653,537,388 | MDExOlB1bGxSZXF1ZXN0NDQ2NDM3MDQ5 | 356 | Add text dataset | [] | closed | false | null | 0 | 2020-07-08T19:21:53Z | 2020-07-10T14:19:03Z | 2020-07-10T14:19:03Z | null | Usage:
```python
from nlp import load_dataset
dset = load_dataset("text", data_files="/path/to/file.txt")["train"]
```
I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes
```bash
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text
```
but I would like a second set of eyes to ensure I did it right.
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 3,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/356/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/356/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/356.diff",
"html_url": "https://github.com/huggingface/datasets/pull/356",
"merged_at": "2020-07-10T14:19:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/356.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/356"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2732/comments | https://api.github.com/repos/huggingface/datasets/issues/2732/events | https://github.com/huggingface/datasets/pull/2732 | 956,676,360 | MDExOlB1bGxSZXF1ZXN0NzAwMjMzMzQy | 2,732 | Updated TTC4900 Dataset | [] | closed | false | null | 2 | 2021-07-30T11:52:14Z | 2021-07-30T16:00:51Z | 2021-07-30T15:58:14Z | null | - The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download.
- Updated readme. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2732/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2732/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2732.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2732",
"merged_at": "2021-07-30T15:58:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2732.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2732"
} | true | [
"@lhoestq, lรผtfen bu PR'ฤฑ gรถzden geรงirebilir misiniz?",
"> Thanks ! This looks all good now :)\r\n\r\nThanks"
] |
https://api.github.com/repos/huggingface/datasets/issues/5517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5517/comments | https://api.github.com/repos/huggingface/datasets/issues/5517/events | https://github.com/huggingface/datasets/issues/5517 | 1,577,976,608 | I_kwDODunzps5eDgMg | 5,517 | `with_format("numpy")` silently downcasts float64 to float32 features | [] | open | false | {
"closed_at": null,
"closed_issues": 0,
"created_at": "2023-02-13T16:22:42Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
},
"description": "Next major release",
"due_on": null,
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"id": 9038583,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"open_issues": 3,
"state": "open",
"title": "3.0",
"updated_at": "2023-04-12T17:00:57Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10"
} | 10 | 2023-02-09T14:18:00Z | 2023-02-14T15:38:54Z | null | null | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print("feature dtype:", dataset.features['a'].dtype)
print("array dtype:", dataset['a'].dtype)
```
output:
```
feature dtype: float64
array dtype: float32
```
### Expected behavior
```
feature dtype: float64
array dtype: float64
```
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.4.4
### Suggested Fix
Changing [the `_tensorize` function of the numpy formatter](https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L32) to
```python
def _tensorize(self, value):
if isinstance(value, (str, bytes, type(None))):
return value
elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character):
return value
elif isinstance(value, np.number):
return value
return np.asarray(value, **self.np_array_kwargs)
```
fixes this particular issue for me. Not sure if this would break other tests. This should also avoid unnecessary copying of the array.
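As a user-side stopgap, assuming `format_kwargs` are still merged over the default dtype before the `np.asarray` call, the dtype can be requested explicitly (note this forces it on every numeric column):
```python
import numpy as np

dataset = dataset.with_format("numpy", dtype=np.float64)
```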
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5517/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5517/timeline | null | null | null | null | false | [
"Hi! This behavior stems from these lines:\r\n\r\nhttps://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L45-L46\r\n\r\nI agree we should preserve the original type whenever possible and downcast explicitly with a warning.\r\n\r\n@lhoestq Do you remember why we need this \"default dtype\" logic in our formatters?",
"I was also wondering why the default type logic is needed. Me just deleting it is probably too naive of a solution.",
"Hmm I think the idea was to end up with the usual default precision for deep learning models - no matter how the data was stored or where it comes from.\r\n\r\nFor example in NLP we store tokens using an optimized low precision to save disk space, but when we set the format to `torch` we actually need to get `int64`. Although the need for a default for integers also comes from numpy not returning the same integer precision depending on your machine. Finally I guess we added a default for floats as well for consistency.\r\n\r\nI'm a bit embarrassed by this though, as a user I'd have expected to get the same precision indeed as well and get a zero copy view.",
"Will you fix this or should I open a PR?",
"Unfortunately removing it for integers is a breaking change for most `transformers` + `datasets` users for NLP (which is a common case). Removing it for floats is a breaking change for `transformers` + `datasets` for ASR as well. And it also is a breaking change for the other users relying on this behavior.\r\n\r\nTherefore I think that the only short term solution is for the user to provide `dtype=` manually and document better this behavior. We could also extend `dtype` to accept a value that means \"return the same dtype as the underlying storage\" and make it easier to do zero copy.",
"@lhoestq It should be fine to remove this conversion in Datasets 3.0, no? For now, we can warn the user (with a log message) about the future change when the default type is changed.",
"Let's see with the transformers team if it sounds reasonable ? We'd have to fix multiple example scripts though.\r\n\r\nIf it's not ok we can also explore keeping this behavior only for tokens and audio data.",
"IMO being coupled with Transformers can lead to unexpected behavior when one tries to use our lib without pairing it with Transformers, so I think it's still important to \"fix\" this, even if it means we will need to update Transformers' example scripts afterward.\r\n",
"Ideally let's update the `transformers` example scripts before the change :P",
"For others that run into the same issue: A temporary workaround for me is this:\r\n```python\r\ndef numpy_transform(batch):\r\n return {key: np.asarray(val) for key, val in batch.items()}\r\n\r\ndataset = dataset.with_transform(numpy_transform)\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/3940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3940/comments | https://api.github.com/repos/huggingface/datasets/issues/3940/events | https://github.com/huggingface/datasets/pull/3940 | 1,171,106,853 | PR_kwDODunzps40iYxr | 3,940 | Create CoVAL metric card | [] | closed | false | null | 1 | 2022-03-16T14:31:49Z | 2022-03-18T17:37:59Z | 2022-03-18T17:35:14Z | null | Initial CoVAL metric card | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3940/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3940/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3940.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3940",
"merged_at": "2022-03-18T17:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3940.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3940"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3342/comments | https://api.github.com/repos/huggingface/datasets/issues/3342/events | https://github.com/huggingface/datasets/pull/3342 | 1,067,481,390 | PR_kwDODunzps4vM3wh | 3,342 | Fix ASSET dataset data URLs | [] | closed | false | null | 1 | 2021-11-30T17:13:30Z | 2021-12-14T14:50:00Z | 2021-12-14T14:50:00Z | null | Change the branch name "master" to "main" in the data URLs, since facebookresearch has changed that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3342/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3342/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3342.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3342",
"merged_at": "2021-12-14T14:50:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3342.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3342"
} | true | [
"> Hi @tianjianjiang, thanks for the fix.\r\n> The links should also be updated in the `dataset_infos.json` file.\r\n> The failing tests are due to the missing tag in the header of the `README.md` file:\r\n\r\nHi @albertvillanova, thank you for the info! My apologies for the messy PR.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5844/comments | https://api.github.com/repos/huggingface/datasets/issues/5844/events | https://github.com/huggingface/datasets/issues/5844 | 1,705,907,812 | I_kwDODunzps5lrhZk | 5,844 | TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to ... | [] | open | false | null | 0 | 2023-05-11T14:15:01Z | 2023-05-11T14:15:01Z | null | null | ### Describe the bug
TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to {'answer': {'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
When I use _load_dataset()_, I get the error:
```python
from datasets import load_dataset
datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
```
Detailed error information is as follows:
```
Traceback (most recent call last):
File "C:/Users/CHENJIALEI/Desktop/NLPCC2023/NLPCC23_SciMRC-main/test2.py", line 9, in <module>
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\load.py", line 1747, in load_dataset
builder_instance.download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 814, in download_and_prepare
self._download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 905, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 1521, in _prepare_split
writer.write_table(table)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\arrow_writer.py", line 540, in write_table
pa_table = table_cast(pa_table, self._schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2069, in table_cast
return cast_table_to_schema(table, schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1913, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
```
It is successful when I load the data separately:
`raw_data = load_dataset("json", data_files="./data/train.json", cache_dir="./cache")`
### Steps to reproduce the bug
1. `from datasets import load_dataset`
2. `datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}`
3. `raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")`
### Expected behavior
Successfully load dataset
### Environment info
datasets == 2.6.1
pyarrow == 8.0.0
python == 3.8
platform: windows11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5844/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2589/comments | https://api.github.com/repos/huggingface/datasets/issues/2589/events | https://github.com/huggingface/datasets/pull/2589 | 936,825,060 | MDExOlB1bGxSZXF1ZXN0NjgzNDc0OTQ0 | 2,589 | Support multilabel metrics | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 5 | 2021-07-05T08:19:25Z | 2022-07-29T10:56:25Z | 2021-07-08T08:40:15Z | null | Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`.
This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed.
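From the user side, the configuration-based alternative discussed in the comments would look roughly like this (the config name is assumed):
```python
from datasets import load_metric

metric = load_metric("f1", "multilabel")
results = metric.compute(
    predictions=[[0, 1, 1], [1, 1, 0]],
    references=[[0, 1, 0], [1, 1, 1]],
    average="macro",
)
```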
Close #2554. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2589/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2589.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2589",
"merged_at": "2021-07-08T08:40:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2589.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2589"
} | true | [
"Hi ! Thanks for the fix :)\r\n\r\nIf I understand correctly, `OptionalSequence` doesn't have an associated arrow type that we know in advance unlike the other feature types, because it depends on the type of the examples.\r\n\r\nFor example, I tested this and it raises an error:\r\n```python\r\nimport datasets as ds\r\nimport pyarrow as pa\r\n\r\nfeatures = ds.Features({\"a\": ds.features.OptionalSequence(ds.Value(\"int32\"))})\r\nbatch = {\"a\": [[0]]}\r\n\r\nwriter = ds.ArrowWriter(features=features, stream=pa.BufferOutputStream())\r\nwriter.write_batch(batch)\r\n# ArrowInvalid: Could not convert [0] with type list: tried to convert to int\r\n```\r\nThis error happens because `features.type` is `StructType(struct<a: int32>)`.\r\n\r\nAnother way to add support for multilabel would be to have several configurations for these metrics. By default it would set the features without sequences, and for the multi label configuration it would use features with sequences. Let me know what you think",
"Hi @lhoestq, thanks for your feedback :)\r\n\r\nDefinitely, your suggested approach is simpler. I am going to refactor all my PR unless we could envision some other use cases where an OptionalSequence might be convenient, but for now I can't think of any...",
"@albertvillanova @lhoestq I couldnt find the related docs in F1 card: https://huggingface.co/spaces/evaluate-metric/f1\r\n\r\nHow do I perform multilabel F1 evaluation using evaluate package?",
"I was going to transfer your question to the `evaluate` GitHub repository, but I saw you have already done it (and even opened a PR):\r\n- https://github.com/huggingface/evaluate/issues/219\r\n- https://github.com/huggingface/evaluate/pull/221\r\n\r\nThanks, @fcakyon. ",
"Sorry to bomb you on multiple channels :sweat_smile: @albertvillanova, I have solved my problems, and opened a PR so that others also don't get confused :+1: "
] |
https://api.github.com/repos/huggingface/datasets/issues/5309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5309/comments | https://api.github.com/repos/huggingface/datasets/issues/5309/events | https://github.com/huggingface/datasets/pull/5309 | 1,466,758,987 | PR_kwDODunzps5D0g1y | 5,309 | Close stream in `ArrowWriter.finalize` before inference error | [] | closed | false | null | 1 | 2022-11-28T16:59:39Z | 2022-12-07T12:55:20Z | 2022-12-07T12:52:15Z | null | Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5309/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5309",
"merged_at": "2022-12-07T12:52:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5309"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5280/comments | https://api.github.com/repos/huggingface/datasets/issues/5280/events | https://github.com/huggingface/datasets/issues/5280 | 1,459,823,179 | I_kwDODunzps5XAyJL | 5,280 | Import error | [] | closed | false | null | 5 | 2022-11-22T12:56:43Z | 2022-12-15T19:57:40Z | 2022-12-15T19:57:40Z | null | https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28
Hi,
I get an error at the above line. I have Python version 3.8.13; the message says I need Python>=3.7, which is true, but I think the if statement is not working properly (or the message is wrong). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5280/timeline | null | completed | null | null | false | [
"Hi ! Can you \r\n```python\r\nimport platform\r\nprint(platform.python_version())\r\n```\r\nto see that it returns ?",
"Hi,\n\n3.8.13\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:37:02 PM\nTo: huggingface/datasets ***@***.***>\nCc: feketedavid1012 ***@***.***>; Author ***@***.***>\nSubject: Re: [huggingface/datasets] Import error (Issue #5280)\n\n\nHi ! Can you\n\nimport platform\nprint(platform.python_version())\n\nto see that it returns ?\n\nโ\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5280#issuecomment-1323691385>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AJW7F5YGG32W6WABYC25NJTWJTD75ANCNFSM6AAAAAASHZJ2AU>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n",
"Then it should work as expected if you use the same python when using `datasets`\r\n\r\nPlease make sure you're running your code in the right environment",
"It's the right environment. But in if statement I have\n\"3.8.13\" < 3.7\nAnd in the error message is Python>=3.7 which is true in my case (3.8.13 is greater then 3.7), so I don't understand my python should be below the 3.7 which case the if statement is right, but the message is wrong, or above 3.7 which case if statement is wrong, error message is right.\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:41:43 PM\nTo: huggingface/datasets ***@***.***>\nCc: feketedavid1012 ***@***.***>; Author ***@***.***>\nSubject: Re: [huggingface/datasets] Import error (Issue #5280)\n\n\nThen it should work as expected if you use the same python when using datasets\n\nPlease make sure you're running your code in the right environment\n\nโ\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5280#issuecomment-1323697094>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AJW7F54JURTAJJWWDO2QGI3WJTERPANCNFSM6AAAAAASHZJ2AU>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n",
"If you're having an error then you're not running your code in the right environment."
] |
https://api.github.com/repos/huggingface/datasets/issues/2427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2427/comments | https://api.github.com/repos/huggingface/datasets/issues/2427/events | https://github.com/huggingface/datasets/pull/2427 | 907,162,923 | MDExOlB1bGxSZXF1ZXN0NjU4MDUwMjAx | 2,427 | Add copyright info to MLSUM dataset | [] | closed | false | null | 2 | 2021-05-31T07:15:57Z | 2021-06-04T09:53:50Z | 2021-06-04T09:53:50Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2427/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2427/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2427.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2427",
"merged_at": "2021-06-04T09:53:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2427.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2427"
} | true | [
"Build fails but this change should not be the reason...",
"rebased on master"
]
https://api.github.com/repos/huggingface/datasets/issues/579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/579/comments | https://api.github.com/repos/huggingface/datasets/issues/579/events | https://github.com/huggingface/datasets/pull/579 | 694,947,599 | MDExOlB1bGxSZXF1ZXN0NDgxMjU1OTI5 | 579 | Doc metrics | [] | closed | false | null | 0 | 2020-09-07T10:15:24Z | 2020-09-10T13:06:11Z | 2020-09-10T13:06:10Z | null | Adding documentation on metrics loading/using/sharing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/579/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/579",
"merged_at": "2020-09-10T13:06:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/579"
} | true | [] |
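Since the PR above documents how metrics are loaded and used, a short usage sketch may help. This reflects the `load_metric` API of that era (metrics have since moved to the separate `evaluate` library):

```python
from datasets import load_metric

# Load a metric, accumulate a batch of predictions, then compute the score
metric = load_metric("accuracy")
metric.add_batch(predictions=[0, 1, 1], references=[0, 1, 0])
print(metric.compute())  # {'accuracy': 0.666...}
```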
https://api.github.com/repos/huggingface/datasets/issues/556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/556/comments | https://api.github.com/repos/huggingface/datasets/issues/556/events | https://github.com/huggingface/datasets/pull/556 | 690,218,423 | MDExOlB1bGxSZXF1ZXN0NDc3MTQ0MTky | 556 | Add DailyDialog | [] | closed | false | null | 0 | 2020-09-01T15:01:15Z | 2020-09-03T15:42:03Z | 2020-09-03T15:38:39Z | null | http://yanran.li/dailydialog.html
https://arxiv.org/pdf/1710.03957.pdf
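A short usage sketch, assuming the dataset is published under the `daily_dialog` name once this PR is merged:

```python
from datasets import load_dataset

# Each example holds the utterances plus dialogue-act/emotion annotations
dialogues = load_dataset("daily_dialog", split="train")
print(dialogues[0]["dialog"][:2])  # first two utterances of a dialogue
```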
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/556/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/556/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/556.diff",
"html_url": "https://github.com/huggingface/datasets/pull/556",
"merged_at": "2020-09-03T15:38:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/556.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/556"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4660/comments | https://api.github.com/repos/huggingface/datasets/issues/4660/events | https://github.com/huggingface/datasets/pull/4660 | 1,297,128,387 | PR_kwDODunzps47AYDq | 4,660 | Fix _resolve_single_pattern_locally on Windows with multiple drives | [] | closed | false | null | 2 | 2022-07-07T09:57:30Z | 2022-07-07T17:03:36Z | 2022-07-07T16:52:07Z | null | Currently, when `_resolve_single_pattern_locally` is called from a different drive than the one in `pattern`, it raises an exception:
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\io\parquet.py:35: in __init__
**kwargs,
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\builder.py:287: in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:761: in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:723: in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:321: in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:239: in _resolve_single_pattern_locally
for filepath in glob_iter
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:242: in <listcomp>
os.path.relpath(filepath, base_path), os.path.relpath(pattern, base_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\pytest-of-runneradmin\\pytest-0\\popen-gw0\\data6\\dataset.parquet'
start = '/'
...
E ValueError: path is on mount 'C:', start on mount 'D:'
```
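The failure is reproducible with the standard library alone; a minimal sketch (using `ntpath` so it runs on any platform, helper name hypothetical) of the error and the kind of guard this PR applies:

```python
import ntpath  # Windows path semantics, importable on any platform

# relpath raises when path and start live on different drive letters:
try:
    ntpath.relpath("C:\\data\\dataset.parquet", start="D:\\work")
except ValueError as e:
    print(e)  # path is on mount 'C:', start on mount 'D:'

# Hypothetical guard in the spirit of the fix: move base_path onto the
# pattern's drive before any relative-path computation.
def align_base_path(base_path: str, pattern: str) -> str:
    base_drive = ntpath.splitdrive(base_path)[0]
    pattern_drive = ntpath.splitdrive(pattern)[0]
    if pattern_drive and base_drive != pattern_drive:
        return pattern_drive + ntpath.sep
    return base_path
```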
This PR makes sure that `base_path` is on the same drive as `pattern`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4660/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4660",
"merged_at": "2022-07-07T16:52:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4660"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch ! Sorry I forgot (again) about windows paths when writing this x)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4142/comments | https://api.github.com/repos/huggingface/datasets/issues/4142/events | https://github.com/huggingface/datasets/issues/4142 | 1,199,794,750 | I_kwDODunzps5Hg2o- | 4,142 | Add ObjectFolder 2.0 dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 1 | 2022-04-11T10:57:51Z | 2022-10-05T10:30:49Z | null | null | ## Adding a Dataset
- **Name:** ObjectFolder 2.0
- **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files, each containing the complete multisensory profile for an object instance.
- **Paper:** https://arxiv.org/abs/2204.02389
- **Data:** https://github.com/rhgao/ObjectFolder
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4142/timeline | null | null | null | null | false | [
"Datasets are not tracked in this repository anymore."
] |
https://api.github.com/repos/huggingface/datasets/issues/1888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1888/comments | https://api.github.com/repos/huggingface/datasets/issues/1888/events | https://github.com/huggingface/datasets/pull/1888 | 809,241,123 | MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4 | 1,888 | Docs for adding new column on formatted dataset | [] | closed | false | null | 1 | 2021-02-16T11:45:00Z | 2021-03-30T14:01:03Z | 2021-02-16T11:58:57Z | null | As mentioned in #1872, we should document how the format gets updated when new columns are added
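A minimal sketch of the behavior these docs cover (my reading; the added documentation is authoritative): with a restricted formatted column set, a column added by `map` is not returned until the format is updated.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.set_format("numpy", columns=["a"])

ds = ds.map(lambda x: {"b": x["a"] + 1})  # add a new column
print(ds[0])  # only "a": the format still selects the original columns

ds.set_format("numpy", columns=["a", "b"])
print(ds[0])  # now both columns are returned
```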
Close #1872 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1888/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1888.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1888",
"merged_at": "2021-02-16T11:58:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1888.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1888"
} | true | [
"Close #1872"
] |
https://api.github.com/repos/huggingface/datasets/issues/3646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3646/comments | https://api.github.com/repos/huggingface/datasets/issues/3646/events | https://github.com/huggingface/datasets/pull/3646 | 1,116,544,627 | PR_kwDODunzps4xsX66 | 3,646 | Fix streaming datasets that are not reset correctly | [] | closed | false | null | 1 | 2022-01-27T17:21:02Z | 2022-01-28T16:34:29Z | 2022-01-28T16:34:28Z | null | Streaming datasets that use `StreamingDownloadManager.iter_archive` and `StreamingDownloadManager.iter_files` had some issues. Indeed, if you try to iterate over such a dataset twice, the second time it will be empty.
This is because the two methods above are generator functions. I fixed this by making them return iterables that are reset properly instead.
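The generator-function pitfall is easy to see in isolation; a minimal sketch (hypothetical names, not the library's code) contrasting a one-shot generator with an iterable that restarts on every `iter()`:

```python
def iter_members():  # generator function: exhausted after one pass
    yield from ["a.txt", "b.txt"]

gen = iter_members()
print(list(gen))  # ['a.txt', 'b.txt']
print(list(gen))  # [] -- second pass is empty, as in the bug

class MembersIterable:  # the fix's shape: a fresh generator per iter()
    def __iter__(self):
        yield from ["a.txt", "b.txt"]

members = MembersIterable()
print(list(members))  # ['a.txt', 'b.txt']
print(list(members))  # ['a.txt', 'b.txt'] -- resets properly
```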
Close https://github.com/huggingface/datasets/issues/3645
cc @anton-l | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3646/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3646/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3646.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3646",
"merged_at": "2022-01-28T16:34:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3646.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3646"
} | true | [
"Works smoothly with the `transformers.Trainer` class now, thank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5469/comments | https://api.github.com/repos/huggingface/datasets/issues/5469/events | https://github.com/huggingface/datasets/pull/5469 | 1,558,346,906 | PR_kwDODunzps5Imhk2 | 5,469 | Remove deprecated `shard_size` arg from `.push_to_hub()` | [] | closed | false | null | 2 | 2023-01-26T15:40:56Z | 2023-01-26T17:37:51Z | 2023-01-26T17:30:59Z | null | The docstrings say it has been deprecated since version 2.4.0; can we remove it? (A sketch of the replacement argument follows this record.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5469/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5469/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5469.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5469",
"merged_at": "2023-01-26T17:30:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5469.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5469"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008272 / 0.011353 (-0.003081) | 0.004494 / 0.011008 (-0.006515) | 0.100764 / 0.038508 (0.062256) | 0.028741 / 0.023109 (0.005632) | 0.309020 / 0.275898 (0.033122) | 0.354184 / 0.323480 (0.030704) | 0.007455 / 0.007986 (-0.000531) | 0.003377 / 0.004328 (-0.000951) | 0.078472 / 0.004250 (0.074222) | 0.034719 / 0.037052 (-0.002333) | 0.312787 / 0.258489 (0.054298) | 0.342878 / 0.293841 (0.049037) | 0.033326 / 0.128546 (-0.095221) | 0.011519 / 0.075646 (-0.064127) | 0.323556 / 0.419271 (-0.095716) | 0.039929 / 0.043533 (-0.003604) | 0.304627 / 0.255139 (0.049488) | 0.322876 / 0.283200 (0.039677) | 0.086410 / 0.141683 (-0.055273) | 1.502607 / 1.452155 (0.050453) | 1.577953 / 1.492716 (0.085237) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192861 / 0.018006 (0.174855) | 0.406008 / 0.000490 (0.405519) | 0.001075 / 0.000200 (0.000875) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023351 / 0.037411 (-0.014060) | 0.096086 / 0.014526 (0.081561) | 0.104641 / 0.176557 (-0.071915) | 0.141940 / 0.737135 (-0.595195) | 0.109266 / 0.296338 (-0.187073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416496 / 0.215209 (0.201287) | 4.161581 / 2.077655 (2.083926) | 1.815357 / 1.504120 (0.311238) | 1.609536 / 1.541195 (0.068341) | 1.654105 / 1.468490 
(0.185615) | 0.693947 / 4.584777 (-3.890830) | 3.349029 / 3.745712 (-0.396683) | 1.883968 / 5.269862 (-3.385893) | 1.287988 / 4.565676 (-3.277688) | 0.081765 / 0.424275 (-0.342511) | 0.012373 / 0.007607 (0.004766) | 0.517186 / 0.226044 (0.291142) | 5.200892 / 2.268929 (2.931964) | 2.247414 / 55.444624 (-53.197211) | 1.910601 / 6.876477 (-4.965876) | 1.965407 / 2.142072 (-0.176666) | 0.814386 / 4.805227 (-3.990841) | 0.149295 / 6.500664 (-6.351369) | 0.064667 / 0.075469 (-0.010802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247258 / 1.841788 (-0.594530) | 13.837355 / 8.074308 (5.763047) | 13.850454 / 10.191392 (3.659062) | 0.136078 / 0.680424 (-0.544346) | 0.028322 / 0.534201 (-0.505878) | 0.391394 / 0.579283 (-0.187889) | 0.407494 / 0.434364 (-0.026870) | 0.473784 / 0.540337 (-0.066554) | 0.562953 / 1.386936 (-0.823983) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006559 / 0.011353 (-0.004794) | 0.004546 / 0.011008 (-0.006462) | 0.099527 / 0.038508 (0.061019) | 0.027428 / 0.023109 (0.004319) | 0.344276 / 0.275898 (0.068377) | 0.377897 / 0.323480 (0.054417) | 0.004913 / 0.007986 (-0.003072) | 0.003338 / 0.004328 (-0.000990) | 0.077589 / 0.004250 (0.073339) | 0.038819 / 0.037052 (0.001766) | 0.343165 / 0.258489 (0.084676) | 0.386228 / 0.293841 (0.092387) | 0.031753 / 0.128546 (-0.096794) | 0.011756 / 0.075646 (-0.063890) | 0.322537 / 0.419271 (-0.096735) | 0.049865 / 0.043533 (0.006332) | 0.340493 / 0.255139 (0.085354) | 0.372179 / 0.283200 (0.088980) | 0.099669 / 0.141683 (-0.042013) | 1.487841 / 1.452155 (0.035686) | 1.527400 / 1.492716 (0.034683) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180782 / 0.018006 (0.162776) | 0.393494 / 0.000490 (0.393004) | 0.003004 / 0.000200 (0.002804) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024997 / 0.037411 (-0.012415) | 0.098232 / 0.014526 (0.083707) | 0.107869 / 0.176557 (-0.068688) | 0.141042 / 0.737135 (-0.596093) | 0.109551 / 0.296338 (-0.186787) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477115 / 0.215209 (0.261906) | 4.783928 / 2.077655 (2.706273) | 2.435725 / 1.504120 (0.931605) | 2.233111 / 1.541195 (0.691916) | 2.341097 / 1.468490 (0.872607) | 0.694304 / 4.584777 (-3.890473) | 3.345687 / 3.745712 (-0.400025) | 1.886932 / 5.269862 (-3.382929) | 1.155585 / 4.565676 (-3.410092) | 0.082867 / 0.424275 (-0.341408) | 0.012420 / 0.007607 (0.004813) | 0.576575 / 0.226044 (0.350530) | 5.777691 / 2.268929 (3.508762) | 2.882219 / 55.444624 (-52.562405) | 2.543613 / 6.876477 (-4.332864) | 2.578939 / 2.142072 (0.436866) | 0.803143 / 4.805227 (-4.002084) | 0.151929 / 6.500664 (-6.348735) | 0.067777 / 0.075469 (-0.007693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282711 / 1.841788 (-0.559077) | 13.942771 / 8.074308 (5.868463) | 13.376206 / 10.191392 (3.184814) | 0.152916 / 0.680424 (-0.527508) | 0.016619 / 0.534201 (-0.517582) | 0.375141 / 0.579283 (-0.204142) | 0.381660 / 0.434364 (-0.052704) | 0.465090 / 0.540337 (-0.075247) | 0.555068 / 1.386936 (-0.831868) |\n\n</details>\n</details>\n\n\n"
] |
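As referenced in the record above, a hedged sketch of the replacement argument (`max_shard_size` superseded `shard_size`; the repo id here is hypothetical):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})
# `shard_size` is gone; cap shard size with `max_shard_size` instead
ds.push_to_hub("username/my_dataset", max_shard_size="500MB")
```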
https://api.github.com/repos/huggingface/datasets/issues/6028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6028/comments | https://api.github.com/repos/huggingface/datasets/issues/6028/events | https://github.com/huggingface/datasets/pull/6028 | 1,803,294,981 | PR_kwDODunzps5Vb3LJ | 6,028 | Use new hffs | [] | closed | false | null | 13 | 2023-07-13T15:41:44Z | 2023-07-17T17:09:39Z | 2023-07-17T17:01:00Z | null | Thanks to @janineguo 's work in https://github.com/huggingface/datasets/pull/5919 which was needed to support HfFileSystem.
Switching to `HfFileSystem` will help implement optimizations in data files resolution.
## Implementation details
I replaced all the `from_hf_repo` and `from_local_or_remote` calls in data_files.py with a single new `from_patterns`, which works for any fsspec path, including hf:// paths, https:// URLs and local paths. This simplifies the codebase, since there is no longer any logic duplication when it comes to data files resolution.
I added `_prepare_path_and_storage_options`, which returns the right storage options to use given a path and a `DownloadConfig`. This is the only place where the logic depends on the filesystem type that must be used.
I also removed the `get_metadata_data_files_list` and `get_patterns_and_data_files` functions added recently, since data files resolution is now handled through a common interface.
## New features
hf:// paths are now supported in data_files
## Breaking changes
DataFilesList and DataFilesDict:
- use `str` paths instead of `Union[Path, Url]`
- require POSIX-style paths for Windows paths
close https://github.com/huggingface/datasets/issues/6017 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6028/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6028/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6028.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6028",
"merged_at": "2023-07-17T17:01:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6028.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6028"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006665 / 0.011353 (-0.004688) | 0.004376 / 0.011008 (-0.006633) | 0.085529 / 0.038508 (0.047021) | 0.076372 / 0.023109 (0.053263) | 0.310019 / 0.275898 (0.034121) | 0.341404 / 0.323480 (0.017924) | 0.005666 / 0.007986 (-0.002320) | 0.003763 / 0.004328 (-0.000566) | 0.064678 / 0.004250 (0.060427) | 0.059283 / 0.037052 (0.022231) | 0.316194 / 0.258489 (0.057704) | 0.349397 / 0.293841 (0.055557) | 0.031199 / 0.128546 (-0.097347) | 0.008724 / 0.075646 (-0.066923) | 0.300236 / 0.419271 (-0.119035) | 0.068872 / 0.043533 (0.025339) | 0.308521 / 0.255139 (0.053382) | 0.331292 / 0.283200 (0.048092) | 0.028236 / 0.141683 (-0.113447) | 1.501365 / 1.452155 (0.049211) | 1.554334 / 1.492716 (0.061618) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238291 / 0.018006 (0.220285) | 0.565069 / 0.000490 (0.564580) | 0.001626 / 0.000200 (0.001426) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029777 / 0.037411 (-0.007634) | 0.082873 / 0.014526 (0.068347) | 0.099619 / 0.176557 (-0.076937) | 0.156572 / 0.737135 (-0.580563) | 0.099887 / 0.296338 (-0.196452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401017 / 0.215209 (0.185808) | 3.827192 / 2.077655 (1.749537) | 1.861554 / 1.504120 (0.357434) | 1.699869 / 1.541195 (0.158674) | 1.720043 / 1.468490 
(0.251553) | 0.486757 / 4.584777 (-4.098020) | 3.638125 / 3.745712 (-0.107587) | 5.844959 / 5.269862 (0.575097) | 3.454901 / 4.565676 (-1.110775) | 0.057650 / 0.424275 (-0.366625) | 0.007341 / 0.007607 (-0.000266) | 0.462698 / 0.226044 (0.236654) | 4.633472 / 2.268929 (2.364544) | 2.287607 / 55.444624 (-53.157017) | 2.057318 / 6.876477 (-4.819159) | 2.203657 / 2.142072 (0.061584) | 0.598136 / 4.805227 (-4.207091) | 0.134012 / 6.500664 (-6.366653) | 0.060824 / 0.075469 (-0.014645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277752 / 1.841788 (-0.564036) | 20.013398 / 8.074308 (11.939089) | 14.372993 / 10.191392 (4.181601) | 0.169991 / 0.680424 (-0.510433) | 0.018344 / 0.534201 (-0.515857) | 0.396985 / 0.579283 (-0.182299) | 0.416289 / 0.434364 (-0.018075) | 0.458658 / 0.540337 (-0.081680) | 0.692980 / 1.386936 (-0.693956) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006689 / 0.011353 (-0.004664) | 0.004393 / 0.011008 (-0.006615) | 0.064069 / 0.038508 (0.025561) | 0.080717 / 0.023109 (0.057607) | 0.370090 / 0.275898 (0.094191) | 0.400432 / 0.323480 (0.076952) | 0.005613 / 0.007986 (-0.002372) | 0.003641 / 0.004328 (-0.000687) | 0.064771 / 0.004250 (0.060520) | 0.057555 / 0.037052 (0.020502) | 0.392156 / 0.258489 (0.133667) | 0.409842 / 0.293841 (0.116001) | 0.031500 / 0.128546 (-0.097047) | 0.008786 / 0.075646 (-0.066860) | 0.070342 / 0.419271 (-0.348929) | 0.048646 / 0.043533 (0.005113) | 0.360914 / 0.255139 (0.105775) | 0.387626 / 0.283200 (0.104426) | 0.022787 / 0.141683 (-0.118896) | 1.508915 / 1.452155 (0.056761) | 1.539719 / 1.492716 (0.047002) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257985 / 0.018006 (0.239979) | 0.550990 / 0.000490 (0.550501) | 0.000407 / 0.000200 (0.000207) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030183 / 0.037411 (-0.007228) | 0.086882 / 0.014526 (0.072356) | 0.102382 / 0.176557 (-0.074175) | 0.154745 / 0.737135 (-0.582390) | 0.104008 / 0.296338 (-0.192331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426284 / 0.215209 (0.211075) | 4.240812 / 2.077655 (2.163158) | 2.261240 / 1.504120 (0.757120) | 2.085905 / 1.541195 (0.544710) | 2.160374 / 1.468490 (0.691883) | 0.481126 / 4.584777 (-4.103651) | 3.516234 / 3.745712 (-0.229478) | 3.325322 / 5.269862 (-1.944539) | 2.043307 / 4.565676 (-2.522369) | 0.056663 / 0.424275 (-0.367612) | 0.007786 / 0.007607 (0.000179) | 0.497614 / 0.226044 (0.271570) | 4.974529 / 2.268929 (2.705600) | 2.700018 / 55.444624 (-52.744606) | 2.393778 / 6.876477 (-4.482699) | 2.628202 / 2.142072 (0.486130) | 0.594316 / 4.805227 (-4.210911) | 0.147092 / 6.500664 (-6.353572) | 0.062207 / 0.075469 (-0.013262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.315676 / 1.841788 (-0.526112) | 20.749251 / 8.074308 (12.674943) | 14.371553 / 10.191392 (4.180160) | 0.170249 / 0.680424 (-0.510175) | 0.018478 / 0.534201 (-0.515722) | 0.395710 / 0.579283 (-0.183573) | 0.409706 / 0.434364 (-0.024658) | 0.463454 / 0.540337 (-0.076884) | 0.615657 / 1.386936 (-0.771279) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007224 / 0.011353 (-0.004129) | 0.004506 / 0.011008 (-0.006503) | 0.096729 / 0.038508 (0.058221) | 0.082394 / 0.023109 (0.059284) | 0.390954 / 0.275898 (0.115056) | 0.416647 / 0.323480 (0.093167) | 0.005894 / 0.007986 (-0.002092) | 0.003756 / 0.004328 (-0.000572) | 0.075800 / 0.004250 (0.071549) | 0.062683 / 0.037052 (0.025631) | 0.398959 / 0.258489 (0.140470) | 0.436624 / 0.293841 (0.142783) | 0.034650 / 0.128546 (-0.093896) | 0.009655 / 0.075646 (-0.065991) | 0.315761 / 0.419271 (-0.103511) | 0.060957 / 0.043533 (0.017424) | 0.385649 / 0.255139 (0.130510) | 0.394022 / 0.283200 (0.110822) | 0.024601 / 0.141683 (-0.117082) | 1.729586 / 1.452155 (0.277431) | 1.724153 / 1.492716 (0.231437) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207070 / 0.018006 (0.189063) | 0.466502 / 0.000490 (0.466012) | 0.010739 / 0.000200 (0.010540) | 0.000214 / 0.000054 (0.000160) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031633 / 0.037411 (-0.005779) | 0.095345 / 0.014526 (0.080819) | 0.105399 / 0.176557 (-0.071157) | 0.174173 / 0.737135 (-0.562962) | 0.104207 / 0.296338 (-0.192132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435312 / 0.215209 (0.220103) | 4.265600 / 2.077655 (2.187946) | 2.056500 / 1.504120 (0.552380) | 1.848023 / 1.541195 (0.306828) | 1.946156 / 1.468490 
(0.477666) | 0.557788 / 4.584777 (-4.026989) | 4.070289 / 3.745712 (0.324577) | 3.608027 / 5.269862 (-1.661835) | 2.214556 / 4.565676 (-2.351121) | 0.062623 / 0.424275 (-0.361652) | 0.008083 / 0.007607 (0.000476) | 0.491782 / 0.226044 (0.265738) | 4.989963 / 2.268929 (2.721035) | 2.575867 / 55.444624 (-52.868757) | 2.208045 / 6.876477 (-4.668431) | 2.364184 / 2.142072 (0.222112) | 0.633925 / 4.805227 (-4.171302) | 0.144323 / 6.500664 (-6.356341) | 0.067505 / 0.075469 (-0.007965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.467219 / 1.841788 (-0.374569) | 22.334967 / 8.074308 (14.260659) | 15.715747 / 10.191392 (5.524355) | 0.175443 / 0.680424 (-0.504980) | 0.026165 / 0.534201 (-0.508036) | 0.490675 / 0.579283 (-0.088608) | 0.509211 / 0.434364 (0.074847) | 0.586303 / 0.540337 (0.045965) | 0.785052 / 1.386936 (-0.601884) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007893 / 0.011353 (-0.003460) | 0.004577 / 0.011008 (-0.006431) | 0.075781 / 0.038508 (0.037273) | 0.095492 / 0.023109 (0.072382) | 0.433259 / 0.275898 (0.157361) | 0.469386 / 0.323480 (0.145906) | 0.006317 / 0.007986 (-0.001669) | 0.003708 / 0.004328 (-0.000621) | 0.074417 / 0.004250 (0.070167) | 0.068605 / 0.037052 (0.031552) | 0.448701 / 0.258489 (0.190212) | 0.469131 / 0.293841 (0.175290) | 0.036647 / 0.128546 (-0.091899) | 0.010077 / 0.075646 (-0.065570) | 0.082457 / 0.419271 (-0.336815) | 0.063255 / 0.043533 (0.019722) | 0.428144 / 0.255139 (0.173005) | 0.451872 / 0.283200 (0.168672) | 0.033953 / 0.141683 (-0.107730) | 1.781752 / 1.452155 (0.329597) | 1.869014 / 1.492716 (0.376297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223596 / 0.018006 (0.205590) | 0.470307 / 0.000490 (0.469818) | 0.005059 / 0.000200 (0.004859) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038804 / 0.037411 (0.001393) | 0.117879 / 0.014526 (0.103353) | 0.140701 / 0.176557 (-0.035855) | 0.194672 / 0.737135 (-0.542463) | 0.132806 / 0.296338 (-0.163533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510109 / 0.215209 (0.294900) | 4.729457 / 2.077655 (2.651803) | 2.512113 / 1.504120 (1.007993) | 2.302553 / 1.541195 (0.761358) | 2.420462 / 1.468490 (0.951972) | 0.531682 / 4.584777 (-4.053095) | 4.061208 / 3.745712 (0.315496) | 3.588542 / 5.269862 (-1.681320) | 2.203187 / 4.565676 (-2.362489) | 0.065791 / 0.424275 (-0.358484) | 0.008839 / 0.007607 (0.001232) | 0.562041 / 0.226044 (0.335997) | 5.702340 / 2.268929 (3.433412) | 3.127609 / 55.444624 (-52.317015) | 2.823060 / 6.876477 (-4.053417) | 2.898675 / 2.142072 (0.756603) | 0.659589 / 4.805227 (-4.145638) | 0.148798 / 6.500664 (-6.351866) | 0.070787 / 0.075469 (-0.004682) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.478317 / 1.841788 (-0.363471) | 21.995400 / 8.074308 (13.921092) | 16.770729 / 10.191392 (6.579337) | 0.226333 / 0.680424 (-0.454091) | 0.021835 / 0.534201 (-0.512366) | 0.460373 / 0.579283 (-0.118910) | 0.479494 / 0.434364 (0.045130) | 0.529470 / 0.540337 (-0.010868) | 0.718066 / 1.386936 (-0.668870) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007824 / 0.011353 (-0.003529) | 0.004601 / 0.011008 (-0.006407) | 0.100025 / 0.038508 (0.061517) | 0.096046 / 0.023109 (0.072936) | 0.376226 / 0.275898 (0.100328) | 0.410905 / 0.323480 (0.087425) | 0.006048 / 0.007986 (-0.001938) | 0.003817 / 0.004328 (-0.000511) | 0.076624 / 0.004250 (0.072374) | 0.066390 / 0.037052 (0.029338) | 0.380098 / 0.258489 (0.121609) | 0.413603 / 0.293841 (0.119762) | 0.036546 / 0.128546 (-0.092001) | 0.009881 / 0.075646 (-0.065765) | 0.344338 / 0.419271 (-0.074934) | 0.061882 / 0.043533 (0.018350) | 0.368568 / 0.255139 (0.113429) | 0.397133 / 0.283200 (0.113934) | 0.027255 / 0.141683 (-0.114428) | 1.795099 / 1.452155 (0.342945) | 1.852443 / 1.492716 (0.359727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247436 / 0.018006 (0.229430) | 0.494119 / 0.000490 (0.493629) | 0.004359 / 0.000200 (0.004159) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034765 / 0.037411 (-0.002647) | 0.104541 / 0.014526 (0.090015) | 0.113898 / 0.176557 (-0.062659) | 0.183634 / 0.737135 (-0.553501) | 0.116423 / 0.296338 (-0.179916) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458747 / 0.215209 (0.243538) | 4.555740 / 2.077655 (2.478085) | 2.217240 / 1.504120 (0.713121) | 2.039879 / 1.541195 (0.498684) | 2.088581 / 1.468490 
(0.620091) | 0.588063 / 4.584777 (-3.996714) | 4.238226 / 3.745712 (0.492514) | 4.768060 / 5.269862 (-0.501802) | 2.857117 / 4.565676 (-1.708560) | 0.068742 / 0.424275 (-0.355533) | 0.008667 / 0.007607 (0.001059) | 0.549294 / 0.226044 (0.323249) | 5.464635 / 2.268929 (3.195706) | 2.744435 / 55.444624 (-52.700189) | 2.347660 / 6.876477 (-4.528816) | 2.616816 / 2.142072 (0.474743) | 0.703701 / 4.805227 (-4.101526) | 0.159749 / 6.500664 (-6.340915) | 0.071990 / 0.075469 (-0.003479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.486599 / 1.841788 (-0.355188) | 22.745438 / 8.074308 (14.671130) | 16.822332 / 10.191392 (6.630940) | 0.184730 / 0.680424 (-0.495694) | 0.021267 / 0.534201 (-0.512934) | 0.467108 / 0.579283 (-0.112176) | 0.472674 / 0.434364 (0.038311) | 0.548094 / 0.540337 (0.007756) | 0.735885 / 1.386936 (-0.651051) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007746 / 0.011353 (-0.003607) | 0.004585 / 0.011008 (-0.006423) | 0.076943 / 0.038508 (0.038435) | 0.087473 / 0.023109 (0.064363) | 0.480099 / 0.275898 (0.204201) | 0.495271 / 0.323480 (0.171791) | 0.006348 / 0.007986 (-0.001638) | 0.003902 / 0.004328 (-0.000426) | 0.077586 / 0.004250 (0.073335) | 0.066467 / 0.037052 (0.029415) | 0.468741 / 0.258489 (0.210252) | 0.506778 / 0.293841 (0.212937) | 0.036877 / 0.128546 (-0.091669) | 0.010102 / 0.075646 (-0.065545) | 0.084419 / 0.419271 (-0.334852) | 0.058721 / 0.043533 (0.015188) | 0.453633 / 0.255139 (0.198494) | 0.481171 / 0.283200 (0.197971) | 0.028716 / 0.141683 (-0.112967) | 1.853048 / 1.452155 (0.400893) | 1.885847 / 1.492716 (0.393130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192136 / 0.018006 (0.174130) | 0.484481 / 0.000490 (0.483991) | 0.002951 / 0.000200 (0.002751) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037949 / 0.037411 (0.000538) | 0.108364 / 0.014526 (0.093838) | 0.119542 / 0.176557 (-0.057014) | 0.188542 / 0.737135 (-0.548593) | 0.122011 / 0.296338 (-0.174327) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483135 / 0.215209 (0.267926) | 4.849715 / 2.077655 (2.772060) | 2.497736 / 1.504120 (0.993616) | 2.314243 / 1.541195 (0.773048) | 2.412739 / 1.468490 (0.944249) | 0.564137 / 4.584777 (-4.020639) | 4.242273 / 3.745712 (0.496561) | 6.337843 / 5.269862 (1.067982) | 3.923250 / 4.565676 (-0.642426) | 0.066464 / 0.424275 (-0.357811) | 0.009217 / 0.007607 (0.001610) | 0.575667 / 0.226044 (0.349623) | 5.746187 / 2.268929 (3.477258) | 3.069655 / 55.444624 (-52.374969) | 2.674798 / 6.876477 (-4.201679) | 2.956535 / 2.142072 (0.814463) | 0.701043 / 4.805227 (-4.104185) | 0.157241 / 6.500664 (-6.343423) | 0.073175 / 0.075469 (-0.002294) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609943 / 1.841788 (-0.231844) | 23.478594 / 8.074308 (15.404286) | 17.454437 / 10.191392 (7.263045) | 0.186422 / 0.680424 (-0.494002) | 0.021703 / 0.534201 (-0.512498) | 0.471704 / 0.579283 (-0.107579) | 0.480553 / 0.434364 (0.046189) | 0.552881 / 0.540337 (0.012544) | 0.722515 / 1.386936 (-0.664421) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007542 / 0.011353 (-0.003811) | 0.004692 / 0.011008 (-0.006316) | 0.099155 / 0.038508 (0.060647) | 0.089365 / 0.023109 (0.066256) | 0.370870 / 0.275898 (0.094972) | 0.422152 / 0.323480 (0.098673) | 0.006223 / 0.007986 (-0.001763) | 0.003852 / 0.004328 (-0.000476) | 0.075438 / 0.004250 (0.071188) | 0.065973 / 0.037052 (0.028921) | 0.381513 / 0.258489 (0.123024) | 0.416196 / 0.293841 (0.122355) | 0.035483 / 0.128546 (-0.093063) | 0.009884 / 0.075646 (-0.065762) | 0.341290 / 0.419271 (-0.077982) | 0.060546 / 0.043533 (0.017014) | 0.365101 / 0.255139 (0.109962) | 0.391058 / 0.283200 (0.107859) | 0.026325 / 0.141683 (-0.115358) | 1.815168 / 1.452155 (0.363013) | 1.834711 / 1.492716 (0.341994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222177 / 0.018006 (0.204171) | 0.501151 / 0.000490 (0.500662) | 0.010202 / 0.000200 (0.010002) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034043 / 0.037411 (-0.003368) | 0.097884 / 0.014526 (0.083358) | 0.114022 / 0.176557 (-0.062534) | 0.186200 / 0.737135 (-0.550935) | 0.115555 / 0.296338 (-0.180783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485857 / 0.215209 (0.270648) | 4.959263 / 2.077655 (2.881608) | 2.501085 / 1.504120 (0.996965) | 2.234660 / 1.541195 (0.693465) | 2.238585 / 1.468490 
(0.770095) | 0.645431 / 4.584777 (-3.939345) | 4.434311 / 3.745712 (0.688599) | 4.771491 / 5.269862 (-0.498371) | 2.778963 / 4.565676 (-1.786714) | 0.075615 / 0.424275 (-0.348660) | 0.009502 / 0.007607 (0.001895) | 0.546539 / 0.226044 (0.320495) | 5.464242 / 2.268929 (3.195314) | 2.894101 / 55.444624 (-52.550524) | 2.513761 / 6.876477 (-4.362715) | 2.719843 / 2.142072 (0.577770) | 0.678828 / 4.805227 (-4.126399) | 0.157839 / 6.500664 (-6.342825) | 0.071305 / 0.075469 (-0.004164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.496879 / 1.841788 (-0.344909) | 22.214452 / 8.074308 (14.140144) | 17.707541 / 10.191392 (7.516149) | 0.197008 / 0.680424 (-0.483416) | 0.024883 / 0.534201 (-0.509318) | 0.493611 / 0.579283 (-0.085672) | 0.500677 / 0.434364 (0.066313) | 0.569381 / 0.540337 (0.029044) | 0.773950 / 1.386936 (-0.612986) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007337 / 0.011353 (-0.004015) | 0.004572 / 0.011008 (-0.006436) | 0.091123 / 0.038508 (0.052615) | 0.079762 / 0.023109 (0.056652) | 0.450527 / 0.275898 (0.174629) | 0.525097 / 0.323480 (0.201617) | 0.005873 / 0.007986 (-0.002112) | 0.003797 / 0.004328 (-0.000532) | 0.076259 / 0.004250 (0.072009) | 0.062745 / 0.037052 (0.025692) | 0.465553 / 0.258489 (0.207064) | 0.546026 / 0.293841 (0.252186) | 0.035638 / 0.128546 (-0.092909) | 0.010086 / 0.075646 (-0.065560) | 0.109269 / 0.419271 (-0.310002) | 0.056765 / 0.043533 (0.013233) | 0.440887 / 0.255139 (0.185748) | 0.513325 / 0.283200 (0.230125) | 0.027206 / 0.141683 (-0.114476) | 1.863564 / 1.452155 (0.411409) | 1.918206 / 1.492716 (0.425490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266479 / 0.018006 (0.248473) | 0.487971 / 0.000490 (0.487481) | 0.012246 / 0.000200 (0.012046) | 0.000119 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035281 / 0.037411 (-0.002130) | 0.102991 / 0.014526 (0.088465) | 0.114638 / 0.176557 (-0.061919) | 0.184117 / 0.737135 (-0.553018) | 0.117943 / 0.296338 (-0.178396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.497897 / 0.215209 (0.282688) | 4.973806 / 2.077655 (2.896151) | 2.596146 / 1.504120 (1.092026) | 2.419694 / 1.541195 (0.878499) | 2.525784 / 1.468490 (1.057294) | 0.568021 / 4.584777 (-4.016756) | 4.296431 / 3.745712 (0.550719) | 3.690682 / 5.269862 (-1.579179) | 2.345965 / 4.565676 (-2.219712) | 0.066859 / 0.424275 (-0.357416) | 0.009093 / 0.007607 (0.001486) | 0.582616 / 0.226044 (0.356571) | 5.826528 / 2.268929 (3.557600) | 3.253222 / 55.444624 (-52.191403) | 2.798447 / 6.876477 (-4.078030) | 3.054609 / 2.142072 (0.912537) | 0.678816 / 4.805227 (-4.126411) | 0.157966 / 6.500664 (-6.342698) | 0.073797 / 0.075469 (-0.001672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599480 / 1.841788 (-0.242308) | 23.249738 / 8.074308 (15.175430) | 16.965406 / 10.191392 (6.774014) | 0.171390 / 0.680424 (-0.509034) | 0.021810 / 0.534201 (-0.512391) | 0.483339 / 0.579283 (-0.095944) | 0.496615 / 0.434364 (0.062251) | 0.583786 / 0.540337 (0.043448) | 0.741699 / 1.386936 (-0.645237) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006054 / 0.011353 (-0.005299) | 0.003706 / 0.011008 (-0.007302) | 0.080060 / 0.038508 (0.041552) | 0.061479 / 0.023109 (0.038370) | 0.327981 / 0.275898 (0.052083) | 0.356930 / 0.323480 (0.033450) | 0.004671 / 0.007986 (-0.003315) | 0.002901 / 0.004328 (-0.001428) | 0.062425 / 0.004250 (0.058174) | 0.046310 / 0.037052 (0.009258) | 0.323657 / 0.258489 (0.065168) | 0.370130 / 0.293841 (0.076289) | 0.027151 / 0.128546 (-0.101395) | 0.007850 / 0.075646 (-0.067797) | 0.262300 / 0.419271 (-0.156971) | 0.045456 / 0.043533 (0.001923) | 0.325569 / 0.255139 (0.070430) | 0.352962 / 0.283200 (0.069762) | 0.020156 / 0.141683 (-0.121527) | 1.429404 / 1.452155 (-0.022750) | 1.615032 / 1.492716 (0.122316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187309 / 0.018006 (0.169303) | 0.428848 / 0.000490 (0.428358) | 0.003599 / 0.000200 (0.003399) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023260 / 0.037411 (-0.014151) | 0.072467 / 0.014526 (0.057941) | 0.082398 / 0.176557 (-0.094159) | 0.142573 / 0.737135 (-0.594562) | 0.082570 / 0.296338 (-0.213768) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426503 / 0.215209 (0.211294) | 4.267875 / 2.077655 (2.190220) | 2.189762 / 1.504120 (0.685642) | 2.027992 / 1.541195 (0.486798) | 2.053211 / 1.468490 
(0.584721) | 0.503850 / 4.584777 (-4.080927) | 3.086444 / 3.745712 (-0.659268) | 3.319492 / 5.269862 (-1.950370) | 2.070714 / 4.565676 (-2.494962) | 0.057591 / 0.424275 (-0.366684) | 0.006407 / 0.007607 (-0.001200) | 0.501145 / 0.226044 (0.275100) | 5.017753 / 2.268929 (2.748825) | 2.643145 / 55.444624 (-52.801479) | 2.327440 / 6.876477 (-4.549037) | 2.460250 / 2.142072 (0.318178) | 0.589397 / 4.805227 (-4.215830) | 0.124948 / 6.500664 (-6.375716) | 0.060450 / 0.075469 (-0.015020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279870 / 1.841788 (-0.561918) | 18.115908 / 8.074308 (10.041600) | 13.570032 / 10.191392 (3.378640) | 0.132981 / 0.680424 (-0.547442) | 0.016942 / 0.534201 (-0.517259) | 0.333591 / 0.579283 (-0.245692) | 0.358844 / 0.434364 (-0.075520) | 0.395748 / 0.540337 (-0.144590) | 0.546213 / 1.386936 (-0.840723) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006062 / 0.011353 (-0.005291) | 0.003673 / 0.011008 (-0.007336) | 0.064726 / 0.038508 (0.026218) | 0.061854 / 0.023109 (0.038745) | 0.385343 / 0.275898 (0.109445) | 0.441284 / 0.323480 (0.117805) | 0.004830 / 0.007986 (-0.003156) | 0.002909 / 0.004328 (-0.001420) | 0.063874 / 0.004250 (0.059624) | 0.049331 / 0.037052 (0.012278) | 0.418484 / 0.258489 (0.159995) | 0.451397 / 0.293841 (0.157556) | 0.027665 / 0.128546 (-0.100881) | 0.008088 / 0.075646 (-0.067558) | 0.069625 / 0.419271 (-0.349646) | 0.043437 / 0.043533 (-0.000095) | 0.359789 / 0.255139 (0.104650) | 0.430206 / 0.283200 (0.147007) | 0.022308 / 0.141683 (-0.119375) | 1.461030 / 1.452155 (0.008875) | 1.513683 / 1.492716 (0.020966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230958 / 0.018006 (0.212952) | 0.417553 / 0.000490 (0.417063) | 0.000802 / 0.000200 (0.000602) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025421 / 0.037411 (-0.011991) | 0.077156 / 0.014526 (0.062630) | 0.087533 / 0.176557 (-0.089024) | 0.138048 / 0.737135 (-0.599087) | 0.089358 / 0.296338 (-0.206981) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439172 / 0.215209 (0.223963) | 4.409509 / 2.077655 (2.331854) | 2.491270 / 1.504120 (0.987150) | 2.308446 / 1.541195 (0.767252) | 2.378440 / 1.468490 (0.909950) | 0.499834 / 4.584777 (-4.084943) | 3.083168 / 3.745712 (-0.662544) | 2.867543 / 5.269862 (-2.402318) | 1.876354 / 4.565676 (-2.689323) | 0.057092 / 0.424275 (-0.367183) | 0.006955 / 0.007607 (-0.000653) | 0.513799 / 0.226044 (0.287754) | 5.126660 / 2.268929 (2.857731) | 2.917348 / 55.444624 (-52.527277) | 2.508035 / 6.876477 (-4.368441) | 2.698089 / 2.142072 (0.556016) | 0.586828 / 4.805227 (-4.218399) | 0.124740 / 6.500664 (-6.375924) | 0.062276 / 0.075469 (-0.013193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291624 / 1.841788 (-0.550164) | 18.199968 / 8.074308 (10.125660) | 13.888139 / 10.191392 (3.696747) | 0.162955 / 0.680424 (-0.517469) | 0.017343 / 0.534201 (-0.516858) | 0.334683 / 0.579283 (-0.244600) | 0.352708 / 0.434364 (-0.081656) | 0.400629 / 0.540337 (-0.139708) | 0.539497 / 1.386936 (-0.847439) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007500 / 0.011353 (-0.003853) | 0.004498 / 0.011008 (-0.006510) | 0.100239 / 0.038508 (0.061731) | 0.083424 / 0.023109 (0.060315) | 0.366664 / 0.275898 (0.090766) | 0.406641 / 0.323480 (0.083161) | 0.004577 / 0.007986 (-0.003409) | 0.004809 / 0.004328 (0.000480) | 0.076898 / 0.004250 (0.072647) | 0.064021 / 0.037052 (0.026969) | 0.375836 / 0.258489 (0.117347) | 0.413008 / 0.293841 (0.119167) | 0.036010 / 0.128546 (-0.092537) | 0.009655 / 0.075646 (-0.065991) | 0.342595 / 0.419271 (-0.076677) | 0.061846 / 0.043533 (0.018313) | 0.376543 / 0.255139 (0.121404) | 0.395858 / 0.283200 (0.112659) | 0.026792 / 0.141683 (-0.114891) | 1.775569 / 1.452155 (0.323414) | 1.865077 / 1.492716 (0.372360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221521 / 0.018006 (0.203514) | 0.474604 / 0.000490 (0.474114) | 0.004354 / 0.000200 (0.004154) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032947 / 0.037411 (-0.004464) | 0.100454 / 0.014526 (0.085928) | 0.111955 / 0.176557 (-0.064602) | 0.179752 / 0.737135 (-0.557383) | 0.114282 / 0.296338 (-0.182056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458261 / 0.215209 (0.243052) | 4.563536 / 2.077655 (2.485881) | 2.231928 / 1.504120 (0.727808) | 2.036751 / 1.541195 (0.495556) | 2.170413 / 1.468490 
(0.701923) | 0.570825 / 4.584777 (-4.013952) | 4.505762 / 3.745712 (0.760050) | 5.033461 / 5.269862 (-0.236401) | 2.704989 / 4.565676 (-1.860687) | 0.067011 / 0.424275 (-0.357264) | 0.008568 / 0.007607 (0.000961) | 0.545151 / 0.226044 (0.319106) | 5.438984 / 2.268929 (3.170055) | 2.771818 / 55.444624 (-52.672806) | 2.393082 / 6.876477 (-4.483395) | 2.467173 / 2.142072 (0.325101) | 0.678849 / 4.805227 (-4.126379) | 0.160480 / 6.500664 (-6.340184) | 0.073681 / 0.075469 (-0.001788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.532272 / 1.841788 (-0.309516) | 22.548741 / 8.074308 (14.474433) | 17.091044 / 10.191392 (6.899652) | 0.172100 / 0.680424 (-0.508324) | 0.022220 / 0.534201 (-0.511981) | 0.467871 / 0.579283 (-0.111412) | 0.491135 / 0.434364 (0.056771) | 0.548433 / 0.540337 (0.008096) | 0.733340 / 1.386936 (-0.653596) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007593 / 0.011353 (-0.003760) | 0.004656 / 0.011008 (-0.006352) | 0.076940 / 0.038508 (0.038431) | 0.085183 / 0.023109 (0.062073) | 0.447178 / 0.275898 (0.171280) | 0.469545 / 0.323480 (0.146065) | 0.006023 / 0.007986 (-0.001962) | 0.003808 / 0.004328 (-0.000520) | 0.076767 / 0.004250 (0.072517) | 0.065713 / 0.037052 (0.028661) | 0.445573 / 0.258489 (0.187084) | 0.481689 / 0.293841 (0.187848) | 0.036893 / 0.128546 (-0.091654) | 0.009976 / 0.075646 (-0.065670) | 0.084443 / 0.419271 (-0.334829) | 0.058829 / 0.043533 (0.015297) | 0.429291 / 0.255139 (0.174152) | 0.454016 / 0.283200 (0.170816) | 0.027289 / 0.141683 (-0.114394) | 1.806786 / 1.452155 (0.354632) | 1.887680 / 1.492716 (0.394964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241012 / 0.018006 (0.223006) | 0.470629 / 0.000490 (0.470139) | 0.003213 / 0.000200 (0.003013) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036896 / 0.037411 (-0.000515) | 0.106932 / 0.014526 (0.092406) | 0.120333 / 0.176557 (-0.056223) | 0.186271 / 0.737135 (-0.550865) | 0.121581 / 0.296338 (-0.174758) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507782 / 0.215209 (0.292573) | 5.062932 / 2.077655 (2.985278) | 2.689539 / 1.504120 (1.185419) | 2.482978 / 1.541195 (0.941784) | 2.561320 / 1.468490 (1.092830) | 0.570664 / 4.584777 (-4.014113) | 4.346051 / 3.745712 (0.600339) | 6.479374 / 5.269862 (1.209513) | 4.096483 / 4.565676 (-0.469194) | 0.067564 / 0.424275 (-0.356711) | 0.009147 / 0.007607 (0.001540) | 0.596059 / 0.226044 (0.370015) | 5.963223 / 2.268929 (3.694295) | 3.201039 / 55.444624 (-52.243585) | 2.816581 / 6.876477 (-4.059896) | 3.047821 / 2.142072 (0.905748) | 0.687749 / 4.805227 (-4.117478) | 0.158174 / 6.500664 (-6.342490) | 0.073329 / 0.075469 (-0.002140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.601346 / 1.841788 (-0.240441) | 23.712210 / 8.074308 (15.637902) | 16.567272 / 10.191392 (6.375880) | 0.224745 / 0.680424 (-0.455679) | 0.021662 / 0.534201 (-0.512539) | 0.471427 / 0.579283 (-0.107856) | 0.498751 / 0.434364 (0.064387) | 0.572047 / 0.540337 (0.031710) | 0.821868 / 1.386936 (-0.565068) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006371 / 0.011353 (-0.004981) | 0.003749 / 0.011008 (-0.007259) | 0.084155 / 0.038508 (0.045647) | 0.072450 / 0.023109 (0.049340) | 0.308002 / 0.275898 (0.032104) | 0.340471 / 0.323480 (0.016991) | 0.005054 / 0.007986 (-0.002931) | 0.003176 / 0.004328 (-0.001152) | 0.064867 / 0.004250 (0.060616) | 0.054305 / 0.037052 (0.017252) | 0.321047 / 0.258489 (0.062558) | 0.345999 / 0.293841 (0.052158) | 0.030507 / 0.128546 (-0.098039) | 0.008299 / 0.075646 (-0.067347) | 0.287682 / 0.419271 (-0.131590) | 0.052048 / 0.043533 (0.008515) | 0.308322 / 0.255139 (0.053183) | 0.333220 / 0.283200 (0.050020) | 0.022698 / 0.141683 (-0.118985) | 1.474033 / 1.452155 (0.021879) | 1.544790 / 1.492716 (0.052074) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200612 / 0.018006 (0.182606) | 0.450934 / 0.000490 (0.450445) | 0.005383 / 0.000200 (0.005183) | 0.000200 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027759 / 0.037411 (-0.009652) | 0.080935 / 0.014526 (0.066409) | 0.093041 / 0.176557 (-0.083516) | 0.148643 / 0.737135 (-0.588492) | 0.093463 / 0.296338 (-0.202876) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.381653 / 0.215209 (0.166444) | 3.810699 / 2.077655 (1.733044) | 1.866858 / 1.504120 (0.362738) | 1.716985 / 1.541195 (0.175790) | 1.788071 / 1.468490 
(0.319581) | 0.481130 / 4.584777 (-4.103647) | 3.529798 / 3.745712 (-0.215914) | 3.982037 / 5.269862 (-1.287824) | 2.324866 / 4.565676 (-2.240811) | 0.056767 / 0.424275 (-0.367508) | 0.007306 / 0.007607 (-0.000301) | 0.459472 / 0.226044 (0.233428) | 4.602808 / 2.268929 (2.333879) | 2.332014 / 55.444624 (-53.112610) | 2.044858 / 6.876477 (-4.831619) | 2.204165 / 2.142072 (0.062093) | 0.577946 / 4.805227 (-4.227281) | 0.130900 / 6.500664 (-6.369764) | 0.059054 / 0.075469 (-0.016415) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245211 / 1.841788 (-0.596576) | 19.176397 / 8.074308 (11.102089) | 13.995280 / 10.191392 (3.803888) | 0.171743 / 0.680424 (-0.508681) | 0.018038 / 0.534201 (-0.516163) | 0.392338 / 0.579283 (-0.186945) | 0.419370 / 0.434364 (-0.014994) | 0.477829 / 0.540337 (-0.062508) | 0.677409 / 1.386936 (-0.709527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006513 / 0.011353 (-0.004840) | 0.003984 / 0.011008 (-0.007024) | 0.064516 / 0.038508 (0.026008) | 0.070504 / 0.023109 (0.047395) | 0.384509 / 0.275898 (0.108611) | 0.410564 / 0.323480 (0.087084) | 0.005310 / 0.007986 (-0.002675) | 0.003268 / 0.004328 (-0.001061) | 0.064684 / 0.004250 (0.060433) | 0.055367 / 0.037052 (0.018315) | 0.399108 / 0.258489 (0.140619) | 0.422740 / 0.293841 (0.128900) | 0.031624 / 0.128546 (-0.096922) | 0.008617 / 0.075646 (-0.067030) | 0.070929 / 0.419271 (-0.348342) | 0.049146 / 0.043533 (0.005613) | 0.385492 / 0.255139 (0.130353) | 0.407434 / 0.283200 (0.124234) | 0.021972 / 0.141683 (-0.119711) | 1.496135 / 1.452155 (0.043980) | 1.533739 / 1.492716 (0.041023) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226218 / 0.018006 (0.208211) | 0.443176 / 0.000490 (0.442686) | 0.000376 / 0.000200 (0.000176) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030315 / 0.037411 (-0.007097) | 0.086416 / 0.014526 (0.071890) | 0.097725 / 0.176557 (-0.078831) | 0.150407 / 0.737135 (-0.586728) | 0.099914 / 0.296338 (-0.196424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409807 / 0.215209 (0.194598) | 4.099086 / 2.077655 (2.021431) | 2.103160 / 1.504120 (0.599040) | 1.927927 / 1.541195 (0.386733) | 1.977751 / 1.468490 (0.509261) | 0.476995 / 4.584777 (-4.107781) | 3.521835 / 3.745712 (-0.223877) | 3.237695 / 5.269862 (-2.032167) | 1.995953 / 4.565676 (-2.569724) | 0.056208 / 0.424275 (-0.368068) | 0.007660 / 0.007607 (0.000053) | 0.483537 / 0.226044 (0.257492) | 4.833974 / 2.268929 (2.565046) | 2.589115 / 55.444624 (-52.855510) | 2.228076 / 6.876477 (-4.648401) | 2.395271 / 2.142072 (0.253198) | 0.577534 / 4.805227 (-4.227694) | 0.131432 / 6.500664 (-6.369232) | 0.060999 / 0.075469 (-0.014471) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356043 / 1.841788 (-0.485745) | 19.470401 / 8.074308 (11.396093) | 14.091266 / 10.191392 (3.899874) | 0.166809 / 0.680424 (-0.513615) | 0.018782 / 0.534201 (-0.515419) | 0.394916 / 0.579283 (-0.184367) | 0.411378 / 0.434364 (-0.022986) | 0.466886 / 0.540337 (-0.073451) | 0.617369 / 1.386936 (-0.769567) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007590 / 0.011353 (-0.003762) | 0.004068 / 0.011008 (-0.006941) | 0.105479 / 0.038508 (0.066971) | 0.085614 / 0.023109 (0.062505) | 0.384325 / 0.275898 (0.108427) | 0.467867 / 0.323480 (0.144387) | 0.004652 / 0.007986 (-0.003333) | 0.005445 / 0.004328 (0.001117) | 0.079604 / 0.004250 (0.075353) | 0.066031 / 0.037052 (0.028978) | 0.426184 / 0.258489 (0.167695) | 0.480712 / 0.293841 (0.186871) | 0.037837 / 0.128546 (-0.090709) | 0.009765 / 0.075646 (-0.065882) | 0.351316 / 0.419271 (-0.067955) | 0.063634 / 0.043533 (0.020101) | 0.420297 / 0.255139 (0.165158) | 0.449169 / 0.283200 (0.165969) | 0.030947 / 0.141683 (-0.110736) | 1.840184 / 1.452155 (0.388029) | 1.934074 / 1.492716 (0.441357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223483 / 0.018006 (0.205477) | 0.521086 / 0.000490 (0.520596) | 0.000379 / 0.000200 (0.000179) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032011 / 0.037411 (-0.005400) | 0.101474 / 0.014526 (0.086948) | 0.108652 / 0.176557 (-0.067904) | 0.173340 / 0.737135 (-0.563796) | 0.114186 / 0.296338 (-0.182153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478020 / 0.215209 (0.262811) | 4.645400 / 2.077655 (2.567746) | 2.590763 / 1.504120 (1.086643) | 2.383002 / 1.541195 (0.841807) | 2.482550 / 1.468490 
(1.014060) | 0.572417 / 4.584777 (-4.012360) | 4.233436 / 3.745712 (0.487724) | 4.858823 / 5.269862 (-0.411038) | 2.838913 / 4.565676 (-1.726764) | 0.070010 / 0.424275 (-0.354265) | 0.009602 / 0.007607 (0.001995) | 0.538735 / 0.226044 (0.312691) | 5.534340 / 2.268929 (3.265411) | 2.915006 / 55.444624 (-52.529619) | 2.625132 / 6.876477 (-4.251345) | 2.537838 / 2.142072 (0.395766) | 0.667870 / 4.805227 (-4.137357) | 0.146330 / 6.500664 (-6.354334) | 0.071631 / 0.075469 (-0.003838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.594686 / 1.841788 (-0.247101) | 22.311113 / 8.074308 (14.236804) | 17.603983 / 10.191392 (7.412591) | 0.195995 / 0.680424 (-0.484428) | 0.022254 / 0.534201 (-0.511947) | 0.479661 / 0.579283 (-0.099622) | 0.463626 / 0.434364 (0.029262) | 0.483465 / 0.540337 (-0.056873) | 0.676141 / 1.386936 (-0.710795) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006146 / 0.011353 (-0.005207) | 0.004856 / 0.011008 (-0.006152) | 0.067506 / 0.038508 (0.028998) | 0.073968 / 0.023109 (0.050859) | 0.470013 / 0.275898 (0.194115) | 0.479022 / 0.323480 (0.155542) | 0.005972 / 0.007986 (-0.002014) | 0.003846 / 0.004328 (-0.000483) | 0.075141 / 0.004250 (0.070890) | 0.058597 / 0.037052 (0.021544) | 0.481454 / 0.258489 (0.222965) | 0.515634 / 0.293841 (0.221793) | 0.034979 / 0.128546 (-0.093567) | 0.010385 / 0.075646 (-0.065261) | 0.072649 / 0.419271 (-0.346622) | 0.058183 / 0.043533 (0.014650) | 0.462138 / 0.255139 (0.206999) | 0.476093 / 0.283200 (0.192893) | 0.032918 / 0.141683 (-0.108765) | 1.820530 / 1.452155 (0.368375) | 1.626360 / 1.492716 (0.133644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208970 / 0.018006 (0.190964) | 0.492478 / 0.000490 (0.491988) | 0.005487 / 0.000200 (0.005287) | 0.000140 / 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037896 / 0.037411 (0.000484) | 0.089752 / 0.014526 (0.075227) | 0.107445 / 0.176557 (-0.069111) | 0.181260 / 0.737135 (-0.555876) | 0.105700 / 0.296338 (-0.190639) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495031 / 0.215209 (0.279821) | 4.806939 / 2.077655 (2.729284) | 2.227928 / 1.504120 (0.723808) | 2.067117 / 1.541195 (0.525922) | 2.348982 / 1.468490 (0.880492) | 0.567201 / 4.584777 (-4.017576) | 4.166592 / 3.745712 (0.420880) | 3.654329 / 5.269862 (-1.615533) | 2.331092 / 4.565676 (-2.234584) | 0.062212 / 0.424275 (-0.362063) | 0.008775 / 0.007607 (0.001168) | 0.515413 / 0.226044 (0.289369) | 5.449300 / 2.268929 (3.180371) | 3.206574 / 55.444624 (-52.238050) | 2.600455 / 6.876477 (-4.276022) | 3.041162 / 2.142072 (0.899089) | 0.681899 / 4.805227 (-4.123328) | 0.155400 / 6.500664 (-6.345265) | 0.073933 / 0.075469 (-0.001537) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.572329 / 1.841788 (-0.269459) | 23.638519 / 8.074308 (15.564211) | 17.145663 / 10.191392 (6.954271) | 0.232690 / 0.680424 (-0.447734) | 0.028620 / 0.534201 (-0.505581) | 0.488105 / 0.579283 (-0.091178) | 0.490365 / 0.434364 (0.056001) | 0.599501 / 0.540337 (0.059164) | 0.708101 / 1.386936 (-0.678835) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005947 / 0.011353 (-0.005406) | 0.003577 / 0.011008 (-0.007431) | 0.081631 / 0.038508 (0.043122) | 0.058651 / 0.023109 (0.035541) | 0.342742 / 0.275898 (0.066843) | 0.384130 / 0.323480 (0.060650) | 0.004620 / 0.007986 (-0.003366) | 0.002885 / 0.004328 (-0.001444) | 0.063698 / 0.004250 (0.059448) | 0.048953 / 0.037052 (0.011901) | 0.367880 / 0.258489 (0.109391) | 0.407050 / 0.293841 (0.113209) | 0.027242 / 0.128546 (-0.101305) | 0.007914 / 0.075646 (-0.067733) | 0.262156 / 0.419271 (-0.157116) | 0.044750 / 0.043533 (0.001218) | 0.351613 / 0.255139 (0.096474) | 0.380284 / 0.283200 (0.097084) | 0.020080 / 0.141683 (-0.121603) | 1.498101 / 1.452155 (0.045946) | 1.543608 / 1.492716 (0.050892) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180014 / 0.018006 (0.162008) | 0.436172 / 0.000490 (0.435682) | 0.003694 / 0.000200 (0.003494) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024389 / 0.037411 (-0.013022) | 0.072874 / 0.014526 (0.058348) | 0.083469 / 0.176557 (-0.093088) | 0.144600 / 0.737135 (-0.592536) | 0.084229 / 0.296338 (-0.212110) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391636 / 0.215209 (0.176427) | 3.906941 / 2.077655 (1.829286) | 1.901944 / 1.504120 (0.397825) | 1.762702 / 1.541195 (0.221507) | 1.817970 / 1.468490 
(0.349480) | 0.500345 / 4.584777 (-4.084432) | 3.011351 / 3.745712 (-0.734361) | 4.417763 / 5.269862 (-0.852098) | 2.689744 / 4.565676 (-1.875933) | 0.057765 / 0.424275 (-0.366511) | 0.006412 / 0.007607 (-0.001195) | 0.468156 / 0.226044 (0.242112) | 4.664975 / 2.268929 (2.396047) | 2.323355 / 55.444624 (-53.121270) | 1.984280 / 6.876477 (-4.892197) | 2.165215 / 2.142072 (0.023142) | 0.586950 / 4.805227 (-4.218278) | 0.124363 / 6.500664 (-6.376301) | 0.060702 / 0.075469 (-0.014767) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238870 / 1.841788 (-0.602917) | 18.587360 / 8.074308 (10.513052) | 13.831674 / 10.191392 (3.640282) | 0.143542 / 0.680424 (-0.536882) | 0.016913 / 0.534201 (-0.517288) | 0.332314 / 0.579283 (-0.246969) | 0.345419 / 0.434364 (-0.088945) | 0.381257 / 0.540337 (-0.159081) | 0.537844 / 1.386936 (-0.849092) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006294 / 0.011353 (-0.005059) | 0.003714 / 0.011008 (-0.007294) | 0.062684 / 0.038508 (0.024176) | 0.063520 / 0.023109 (0.040411) | 0.389591 / 0.275898 (0.113693) | 0.444278 / 0.323480 (0.120798) | 0.004825 / 0.007986 (-0.003160) | 0.003010 / 0.004328 (-0.001318) | 0.062767 / 0.004250 (0.058517) | 0.051739 / 0.037052 (0.014686) | 0.434299 / 0.258489 (0.175810) | 0.452003 / 0.293841 (0.158162) | 0.027375 / 0.128546 (-0.101171) | 0.008135 / 0.075646 (-0.067511) | 0.067401 / 0.419271 (-0.351871) | 0.042752 / 0.043533 (-0.000780) | 0.367633 / 0.255139 (0.112494) | 0.433039 / 0.283200 (0.149840) | 0.021086 / 0.141683 (-0.120597) | 1.488024 / 1.452155 (0.035870) | 1.507767 / 1.492716 (0.015050) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230046 / 0.018006 (0.212040) | 0.428085 / 0.000490 (0.427595) | 0.002188 / 0.000200 (0.001988) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026705 / 0.037411 (-0.010706) | 0.082466 / 0.014526 (0.067940) | 0.089378 / 0.176557 (-0.087179) | 0.147287 / 0.737135 (-0.589849) | 0.090426 / 0.296338 (-0.205913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430882 / 0.215209 (0.215672) | 4.296224 / 2.077655 (2.218569) | 2.229982 / 1.504120 (0.725862) | 2.048506 / 1.541195 (0.507311) | 2.129514 / 1.468490 (0.661024) | 0.502964 / 4.584777 (-4.081813) | 3.048125 / 3.745712 (-0.697587) | 4.208636 / 5.269862 (-1.061226) | 2.594015 / 4.565676 (-1.971661) | 0.057967 / 0.424275 (-0.366308) | 0.006875 / 0.007607 (-0.000732) | 0.513872 / 0.226044 (0.287828) | 5.126435 / 2.268929 (2.857506) | 2.691278 / 55.444624 (-52.753346) | 2.361723 / 6.876477 (-4.514754) | 2.511213 / 2.142072 (0.369141) | 0.593558 / 4.805227 (-4.211670) | 0.129332 / 6.500664 (-6.371332) | 0.064051 / 0.075469 (-0.011418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289049 / 1.841788 (-0.552739) | 18.912363 / 8.074308 (10.838055) | 14.226500 / 10.191392 (4.035108) | 0.131392 / 0.680424 (-0.549032) | 0.016750 / 0.534201 (-0.517451) | 0.330078 / 0.579283 (-0.249205) | 0.347588 / 0.434364 (-0.086776) | 0.383234 / 0.540337 (-0.157103) | 0.510967 / 1.386936 (-0.875969) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005379) | 0.003691 / 0.011008 (-0.007317) | 0.079410 / 0.038508 (0.040902) | 0.061769 / 0.023109 (0.038660) | 0.323310 / 0.275898 (0.047412) | 0.354325 / 0.323480 (0.030845) | 0.004794 / 0.007986 (-0.003191) | 0.002899 / 0.004328 (-0.001430) | 0.062104 / 0.004250 (0.057854) | 0.048973 / 0.037052 (0.011921) | 0.326497 / 0.258489 (0.068008) | 0.361347 / 0.293841 (0.067506) | 0.026741 / 0.128546 (-0.101805) | 0.007936 / 0.075646 (-0.067710) | 0.259168 / 0.419271 (-0.160104) | 0.044859 / 0.043533 (0.001327) | 0.319342 / 0.255139 (0.064203) | 0.343711 / 0.283200 (0.060511) | 0.022298 / 0.141683 (-0.119384) | 1.451595 / 1.452155 (-0.000560) | 1.573730 / 1.492716 (0.081014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.173086 / 0.018006 (0.155080) | 0.432400 / 0.000490 (0.431910) | 0.003739 / 0.000200 (0.003539) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024477 / 0.037411 (-0.012934) | 0.073463 / 0.014526 (0.058937) | 0.083410 / 0.176557 (-0.093146) | 0.144760 / 0.737135 (-0.592376) | 0.084199 / 0.296338 (-0.212140) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388251 / 0.215209 (0.173042) | 3.875375 / 2.077655 (1.797720) | 1.875515 / 1.504120 (0.371395) | 1.729282 / 1.541195 (0.188087) | 1.784732 / 1.468490 
(0.316242) | 0.496985 / 4.584777 (-4.087792) | 3.030276 / 3.745712 (-0.715436) | 2.813192 / 5.269862 (-2.456669) | 1.868647 / 4.565676 (-2.697030) | 0.057376 / 0.424275 (-0.366899) | 0.006463 / 0.007607 (-0.001144) | 0.462153 / 0.226044 (0.236108) | 4.586583 / 2.268929 (2.317654) | 2.287730 / 55.444624 (-53.156894) | 1.972177 / 6.876477 (-4.904299) | 2.151592 / 2.142072 (0.009520) | 0.587169 / 4.805227 (-4.218058) | 0.127063 / 6.500664 (-6.373601) | 0.060297 / 0.075469 (-0.015172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267651 / 1.841788 (-0.574136) | 18.426011 / 8.074308 (10.351703) | 14.050470 / 10.191392 (3.859078) | 0.148063 / 0.680424 (-0.532361) | 0.017112 / 0.534201 (-0.517089) | 0.330051 / 0.579283 (-0.249232) | 0.358730 / 0.434364 (-0.075634) | 0.392365 / 0.540337 (-0.147972) | 0.534650 / 1.386936 (-0.852286) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005936 / 0.011353 (-0.005417) | 0.003652 / 0.011008 (-0.007356) | 0.063066 / 0.038508 (0.024558) | 0.060617 / 0.023109 (0.037507) | 0.388293 / 0.275898 (0.112395) | 0.411422 / 0.323480 (0.087942) | 0.004691 / 0.007986 (-0.003295) | 0.002857 / 0.004328 (-0.001472) | 0.064198 / 0.004250 (0.059947) | 0.049124 / 0.037052 (0.012071) | 0.403601 / 0.258489 (0.145112) | 0.413619 / 0.293841 (0.119778) | 0.027279 / 0.128546 (-0.101267) | 0.008072 / 0.075646 (-0.067575) | 0.067890 / 0.419271 (-0.351381) | 0.041866 / 0.043533 (-0.001667) | 0.393438 / 0.255139 (0.138299) | 0.402865 / 0.283200 (0.119666) | 0.023381 / 0.141683 (-0.118302) | 1.496324 / 1.452155 (0.044170) | 1.538080 / 1.492716 (0.045364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212065 / 0.018006 (0.194059) | 0.410511 / 0.000490 (0.410021) | 0.001236 / 0.000200 (0.001036) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026012 / 0.037411 (-0.011399) | 0.076592 / 0.014526 (0.062066) | 0.085963 / 0.176557 (-0.090594) | 0.137803 / 0.737135 (-0.599332) | 0.087594 / 0.296338 (-0.208745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434283 / 0.215209 (0.219074) | 4.345478 / 2.077655 (2.267824) | 2.400954 / 1.504120 (0.896834) | 2.282024 / 1.541195 (0.740829) | 2.414247 / 1.468490 (0.945757) | 0.501855 / 4.584777 (-4.082922) | 3.059433 / 3.745712 (-0.686279) | 2.811288 / 5.269862 (-2.458574) | 1.856839 / 4.565676 (-2.708838) | 0.058017 / 0.424275 (-0.366258) | 0.006844 / 0.007607 (-0.000763) | 0.515376 / 0.226044 (0.289332) | 5.148775 / 2.268929 (2.879847) | 2.930807 / 55.444624 (-52.513817) | 2.520532 / 6.876477 (-4.355944) | 2.746299 / 2.142072 (0.604227) | 0.590102 / 4.805227 (-4.215125) | 0.125747 / 6.500664 (-6.374917) | 0.061873 / 0.075469 (-0.013597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306247 / 1.841788 (-0.535541) | 18.366048 / 8.074308 (10.291740) | 13.855617 / 10.191392 (3.664225) | 0.150124 / 0.680424 (-0.530300) | 0.017189 / 0.534201 (-0.517012) | 0.336285 / 0.579283 (-0.242998) | 0.344985 / 0.434364 (-0.089379) | 0.397973 / 0.540337 (-0.142364) | 0.536142 / 1.386936 (-0.850794) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006401 / 0.011353 (-0.004952) | 0.003789 / 0.011008 (-0.007219) | 0.079516 / 0.038508 (0.041008) | 0.068279 / 0.023109 (0.045170) | 0.295691 / 0.275898 (0.019793) | 0.327208 / 0.323480 (0.003728) | 0.005070 / 0.007986 (-0.002915) | 0.003044 / 0.004328 (-0.001285) | 0.061411 / 0.004250 (0.057161) | 0.053227 / 0.037052 (0.016175) | 0.297368 / 0.258489 (0.038879) | 0.334740 / 0.293841 (0.040899) | 0.029459 / 0.128546 (-0.099087) | 0.008080 / 0.075646 (-0.067566) | 0.267344 / 0.419271 (-0.151927) | 0.049877 / 0.043533 (0.006344) | 0.293853 / 0.255139 (0.038714) | 0.319819 / 0.283200 (0.036620) | 0.022593 / 0.141683 (-0.119089) | 1.459054 / 1.452155 (0.006900) | 1.471250 / 1.492716 (-0.021466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194326 / 0.018006 (0.176320) | 0.443565 / 0.000490 (0.443075) | 0.003745 / 0.000200 (0.003545) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026640 / 0.037411 (-0.010772) | 0.077630 / 0.014526 (0.063104) | 0.089364 / 0.176557 (-0.087192) | 0.147327 / 0.737135 (-0.589809) | 0.089603 / 0.296338 (-0.206735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.373758 / 0.215209 (0.158549) | 3.746778 / 2.077655 (1.669123) | 1.814991 / 1.504120 (0.310871) | 1.645650 / 1.541195 (0.104455) | 1.690752 / 1.468490 
(0.222262) | 0.472117 / 4.584777 (-4.112660) | 3.457346 / 3.745712 (-0.288367) | 3.138869 / 5.269862 (-2.130993) | 1.934924 / 4.565676 (-2.630753) | 0.055709 / 0.424275 (-0.368566) | 0.006680 / 0.007607 (-0.000927) | 0.446874 / 0.226044 (0.220829) | 4.458409 / 2.268929 (2.189480) | 2.253932 / 55.444624 (-53.190693) | 2.007240 / 6.876477 (-4.869237) | 2.081687 / 2.142072 (-0.060386) | 0.563379 / 4.805227 (-4.241848) | 0.128694 / 6.500664 (-6.371970) | 0.057409 / 0.075469 (-0.018060) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212231 / 1.841788 (-0.629556) | 18.519121 / 8.074308 (10.444813) | 13.582243 / 10.191392 (3.390851) | 0.142488 / 0.680424 (-0.537936) | 0.017421 / 0.534201 (-0.516780) | 0.366864 / 0.579283 (-0.212419) | 0.401467 / 0.434364 (-0.032897) | 0.443659 / 0.540337 (-0.096679) | 0.618854 / 1.386936 (-0.768082) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003690 / 0.011008 (-0.007318) | 0.060340 / 0.038508 (0.021832) | 0.067215 / 0.023109 (0.044106) | 0.382846 / 0.275898 (0.106948) | 0.415774 / 0.323480 (0.092294) | 0.004868 / 0.007986 (-0.003118) | 0.003108 / 0.004328 (-0.001221) | 0.060572 / 0.004250 (0.056321) | 0.050453 / 0.037052 (0.013401) | 0.400494 / 0.258489 (0.142005) | 0.424368 / 0.293841 (0.130527) | 0.030279 / 0.128546 (-0.098267) | 0.008151 / 0.075646 (-0.067495) | 0.066707 / 0.419271 (-0.352564) | 0.046118 / 0.043533 (0.002585) | 0.386697 / 0.255139 (0.131558) | 0.410156 / 0.283200 (0.126957) | 0.020688 / 0.141683 (-0.120995) | 1.418162 / 1.452155 (-0.033993) | 1.463057 / 1.492716 (-0.029659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216081 / 0.018006 (0.198075) | 0.440541 / 0.000490 (0.440051) | 0.000371 / 0.000200 (0.000171) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027763 / 0.037411 (-0.009648) | 0.082316 / 0.014526 (0.067791) | 0.094086 / 0.176557 (-0.082471) | 0.144738 / 0.737135 (-0.592398) | 0.094837 / 0.296338 (-0.201501) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396277 / 0.215209 (0.181068) | 3.958791 / 2.077655 (1.881136) | 2.021367 / 1.504120 (0.517247) | 1.860112 / 1.541195 (0.318917) | 1.886032 / 1.468490 (0.417541) | 0.468536 / 4.584777 (-4.116241) | 3.417950 / 3.745712 (-0.327762) | 4.849991 / 5.269862 (-0.419871) | 2.773935 / 4.565676 (-1.791742) | 0.055813 / 0.424275 (-0.368462) | 0.007053 / 0.007607 (-0.000554) | 0.470167 / 0.226044 (0.244122) | 4.702969 / 2.268929 (2.434041) | 2.474161 / 55.444624 (-52.970464) | 2.171256 / 6.876477 (-4.705220) | 2.315373 / 2.142072 (0.173301) | 0.589195 / 4.805227 (-4.216032) | 0.128237 / 6.500664 (-6.372427) | 0.058641 / 0.075469 (-0.016828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292947 / 1.841788 (-0.548841) | 18.851300 / 8.074308 (10.776992) | 14.089764 / 10.191392 (3.898372) | 0.164853 / 0.680424 (-0.515571) | 0.017281 / 0.534201 (-0.516920) | 0.359112 / 0.579283 (-0.220171) | 0.386696 / 0.434364 (-0.047668) | 0.428222 / 0.540337 (-0.112115) | 0.568659 / 1.386936 (-0.818277) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006051 / 0.011353 (-0.005301) | 0.003654 / 0.011008 (-0.007355) | 0.080081 / 0.038508 (0.041572) | 0.062925 / 0.023109 (0.039815) | 0.358097 / 0.275898 (0.082199) | 0.405728 / 0.323480 (0.082248) | 0.005359 / 0.007986 (-0.002627) | 0.002820 / 0.004328 (-0.001508) | 0.063108 / 0.004250 (0.058858) | 0.049627 / 0.037052 (0.012575) | 0.397870 / 0.258489 (0.139381) | 0.437157 / 0.293841 (0.143316) | 0.027707 / 0.128546 (-0.100839) | 0.007911 / 0.075646 (-0.067735) | 0.260991 / 0.419271 (-0.158280) | 0.044771 / 0.043533 (0.001238) | 0.340230 / 0.255139 (0.085091) | 0.384925 / 0.283200 (0.101725) | 0.021369 / 0.141683 (-0.120314) | 1.431439 / 1.452155 (-0.020715) | 1.478794 / 1.492716 (-0.013922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.182626 / 0.018006 (0.164620) | 0.435551 / 0.000490 (0.435061) | 0.003015 / 0.000200 (0.002815) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024703 / 0.037411 (-0.012708) | 0.073640 / 0.014526 (0.059114) | 0.084598 / 0.176557 (-0.091959) | 0.145810 / 0.737135 (-0.591325) | 0.085125 / 0.296338 (-0.211213) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394539 / 0.215209 (0.179330) | 3.945882 / 2.077655 (1.868227) | 1.947166 / 1.504120 (0.443046) | 1.763305 / 1.541195 (0.222111) | 1.816208 / 1.468490 
(0.347718) | 0.498880 / 4.584777 (-4.085897) | 3.098283 / 3.745712 (-0.647429) | 2.823474 / 5.269862 (-2.446388) | 1.873993 / 4.565676 (-2.691684) | 0.058097 / 0.424275 (-0.366179) | 0.006488 / 0.007607 (-0.001119) | 0.466711 / 0.226044 (0.240667) | 4.671520 / 2.268929 (2.402592) | 2.363381 / 55.444624 (-53.081243) | 2.052092 / 6.876477 (-4.824385) | 2.209212 / 2.142072 (0.067140) | 0.594650 / 4.805227 (-4.210577) | 0.125604 / 6.500664 (-6.375060) | 0.061511 / 0.075469 (-0.013958) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226564 / 1.841788 (-0.615224) | 18.583605 / 8.074308 (10.509297) | 13.993091 / 10.191392 (3.801699) | 0.146185 / 0.680424 (-0.534239) | 0.016839 / 0.534201 (-0.517362) | 0.334116 / 0.579283 (-0.245167) | 0.360780 / 0.434364 (-0.073584) | 0.386008 / 0.540337 (-0.154329) | 0.643278 / 1.386936 (-0.743658) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006174 / 0.011353 (-0.005179) | 0.003658 / 0.011008 (-0.007350) | 0.063250 / 0.038508 (0.024742) | 0.063542 / 0.023109 (0.040433) | 0.366845 / 0.275898 (0.090947) | 0.409794 / 0.323480 (0.086314) | 0.005678 / 0.007986 (-0.002308) | 0.003061 / 0.004328 (-0.001268) | 0.063561 / 0.004250 (0.059311) | 0.052648 / 0.037052 (0.015596) | 0.378096 / 0.258489 (0.119607) | 0.410706 / 0.293841 (0.116865) | 0.027668 / 0.128546 (-0.100878) | 0.008045 / 0.075646 (-0.067601) | 0.068290 / 0.419271 (-0.350981) | 0.042602 / 0.043533 (-0.000930) | 0.364976 / 0.255139 (0.109837) | 0.395599 / 0.283200 (0.112400) | 0.022733 / 0.141683 (-0.118950) | 1.522473 / 1.452155 (0.070319) | 1.515891 / 1.492716 (0.023175) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232554 / 0.018006 (0.214547) | 0.420702 / 0.000490 (0.420213) | 0.002161 / 0.000200 (0.001961) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026276 / 0.037411 (-0.011135) | 0.078504 / 0.014526 (0.063978) | 0.088989 / 0.176557 (-0.087567) | 0.144044 / 0.737135 (-0.593091) | 0.091074 / 0.296338 (-0.205265) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420189 / 0.215209 (0.204980) | 4.189596 / 2.077655 (2.111941) | 2.316425 / 1.504120 (0.812305) | 2.186877 / 1.541195 (0.645682) | 2.259065 / 1.468490 (0.790575) | 0.502827 / 4.584777 (-4.081950) | 3.135266 / 3.745712 (-0.610446) | 2.838808 / 5.269862 (-2.431053) | 1.876519 / 4.565676 (-2.689158) | 0.057802 / 0.424275 (-0.366473) | 0.006824 / 0.007607 (-0.000784) | 0.500213 / 0.226044 (0.274168) | 4.999798 / 2.268929 (2.730869) | 2.627713 / 55.444624 (-52.816911) | 2.344263 / 6.876477 (-4.532214) | 2.415449 / 2.142072 (0.273376) | 0.593082 / 4.805227 (-4.212145) | 0.125787 / 6.500664 (-6.374877) | 0.062699 / 0.075469 (-0.012770) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.308219 / 1.841788 (-0.533569) | 18.703099 / 8.074308 (10.628791) | 13.976234 / 10.191392 (3.784842) | 0.144037 / 0.680424 (-0.536387) | 0.016592 / 0.534201 (-0.517609) | 0.333078 / 0.579283 (-0.246206) | 0.342317 / 0.434364 (-0.092047) | 0.396837 / 0.540337 (-0.143500) | 0.532641 / 1.386936 (-0.854295) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1660/comments | https://api.github.com/repos/huggingface/datasets/issues/1660/events | https://github.com/huggingface/datasets/pull/1660 | 775,831,423 | MDExOlB1bGxSZXF1ZXN0NTQ2NDM2MDg1 | 1,660 | add dataset info | [] | closed | false | null | 0 | 2020-12-29T10:58:19Z | 2020-12-30T17:04:30Z | 2020-12-30T17:04:30Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1660/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1660",
"merged_at": "2020-12-30T17:04:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1660"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/2572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2572/comments | https://api.github.com/repos/huggingface/datasets/issues/2572/events | https://github.com/huggingface/datasets/issues/2572 | 934,573,767 | MDU6SXNzdWU5MzQ1NzM3Njc= | 2,572 | Support Zstandard compressed files | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 5 | 2021-07-01T08:37:04Z | 2023-01-03T15:34:01Z | 2021-07-05T10:50:27Z | null | Add support for Zstandard compressed files: https://facebook.github.io/zstd/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2572/timeline | null | completed | null | null | false | [
"I am trying to load a dataset using Hugging Face Datasets load_dataset method. I am getting the value error as show below. Can someone help with this? I am using Windows laptop and Google Colab notebook.\r\n\r\n```\r\n!pip install zstandard\r\nfrom datasets import load_dataset\r\n\r\nlds = load_dataset(\r\n \"json\",\r\n data_files=\"https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst\",\r\n split=\"train\",\r\n streaming=True,\r\n)\r\n\r\nWARNING:datasets.builder:Using custom data configuration default-a1d9e8eaedd958cd\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-12-5b4fdcb8e6d5>](https://localhost:8080/#) in <module>\r\n 6 )\r\n 7 \r\n----> 8 next(iter(law_dataset_streamed))\r\n\r\n17 frames\r\n[/usr/local/lib/python3.8/dist-packages/fsspec/core.py](https://localhost:8080/#) in get_compression(urlpath, compression)\r\n 485 compression = infer_compression(urlpath)\r\n 486 if compression is not None and compression not in compr:\r\n--> 487 raise ValueError(\"Compression type %s not supported\" % compression)\r\n 488 return compression\r\n 489 \r\n\r\nValueError: Compression type zstd not supported\r\n```",
"I just tried on google colab and this works:\r\n```python\r\n!pip install zstandard\r\n!pip install datasets\r\nfrom datasets import load_dataset\r\n\r\nlds = load_dataset(\r\n \"json\",\r\n data_files=\"https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst\",\r\n split=\"train\",\r\n streaming=True,\r\n)\r\nnext(iter(lds))\r\n```\r\n\r\nCan you check that you have a correct installation of `zstandard` ?",
"@lhoestq please note [this](https://github.com/huggingface/datasets/issues/2572#issuecomment-1363718916) is a duplicate of:\r\n- #5388",
"Oh thanks I missed that one !",
"> I just tried on google colab and this works:\r\n> \r\n> ```python\r\n> !pip install zstandard\r\n> !pip install datasets\r\n> from datasets import load_dataset\r\n> \r\n> lds = load_dataset(\r\n> \"json\",\r\n> data_files=\"https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst\",\r\n> split=\"train\",\r\n> streaming=True,\r\n> )\r\n> next(iter(lds))\r\n> ```\r\n> \r\n> Can you check that you have a correct installation of `zstandard` ?\r\n\r\nI was downloading datasets first then was doing zstandard installation and that was causing the issue. This was highlighted by the Hugging Face staff and that helped. Now the issue is resolved. Thank you."
] |
https://api.github.com/repos/huggingface/datasets/issues/4666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4666/comments | https://api.github.com/repos/huggingface/datasets/issues/4666/events | https://github.com/huggingface/datasets/issues/4666 | 1,299,732,238 | I_kwDODunzps5NeFcO | 4,666 | Issues with concatenating datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-07-09T17:45:14Z | 2022-07-12T17:16:15Z | 2022-07-12T17:16:14Z | null | ## Describe the bug
It is impossible to concatenate datasets if a feature is a sequence of dicts in one dataset and a dict of sequences in another. But based on the documentation, it should be automatically converted.
> A [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence) with an internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatibility layer with the TensorFlow Datasets library but may be unwanted in some cases. If you don't want this behavior, you can use a python list instead of the [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence).
## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_dataset
squad = load_dataset("squad_v2")
squad["train"].to_json("output.jsonl", lines=True)
temp = load_dataset("json", data_files={"train": "output.jsonl"})
concatenate_datasets([temp["train"], squad["train"]])
```
## Expected results
No error executing that code
## Actual results
```
ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)} or Value("null").
```
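A minimal workaround sketch, assuming we want the JSON round trip to keep the original SQuAD schema (this mirrors the `features=` fix suggested in the comments):
```python
from datasets import concatenate_datasets, load_dataset

squad = load_dataset("squad_v2")
squad["train"].to_json("output.jsonl", lines=True)

# Reusing the original features avoids both the int64/int32 promotion
# and the sequence-of-dicts vs dict-of-sequences mismatch after the round trip.
temp = load_dataset("json", data_files={"train": "output.jsonl"}, features=squad["train"].features)
concatenate_datasets([temp["train"], squad["train"]])
```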
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.8.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4666/timeline | null | completed | null | null | false | [
"Hi! I agree we should improve the features equality checks to account for this particular case. However, your code fails due to `answer_start` having the dtype `int64` instead of `int32` after loading from JSON (it's not possible to embed type precision info into a JSON file; `save_to_disk` does that for arrow files), which would lead to the concatenation error as PyArrow does not support this sort of type promotion. This can be fixed as follows:\r\n```python\r\ntemp = load_dataset(\"json\", data_files={\"train\": \"output.jsonl\"}, features=squad[\"train\"].features)\r\n``` ",
"That makes sense. I totally missed the `int64` and `int32` part. Thanks for pointing it out! Will close this issue for now."
] |
https://api.github.com/repos/huggingface/datasets/issues/3816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3816/comments | https://api.github.com/repos/huggingface/datasets/issues/3816/events | https://github.com/huggingface/datasets/pull/3816 | 1,158,589,913 | PR_kwDODunzps4z5owP | 3,816 | Doc new UI test workflows2 | [] | closed | false | null | 1 | 2022-03-03T15:59:14Z | 2022-10-04T09:35:53Z | 2022-03-03T16:42:15Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3816/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3816/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3816.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3816",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3816.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3816"
} | true | [
"<img src=\"https://www.bikevillastravel.com/cms/static/images/loading.gif\" alt=\"Girl in a jacket\" width=\"50\" >"
] |
https://api.github.com/repos/huggingface/datasets/issues/1380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1380/comments | https://api.github.com/repos/huggingface/datasets/issues/1380/events | https://github.com/huggingface/datasets/pull/1380 | 760,320,494 | MDExOlB1bGxSZXF1ZXN0NTM1MTcxOTAw | 1,380 | Add Tatoeba Dataset | [] | closed | false | null | 0 | 2020-12-09T13:16:04Z | 2020-12-10T16:54:28Z | 2020-12-10T16:54:27Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1380/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1380/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1380.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1380",
"merged_at": "2020-12-10T16:54:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1380.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1380"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/1043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1043/comments | https://api.github.com/repos/huggingface/datasets/issues/1043/events | https://github.com/huggingface/datasets/pull/1043 | 756,100,717 | MDExOlB1bGxSZXF1ZXN0NTMxNzAwMDQ1 | 1,043 | Add TSAC: Tunisian Sentiment Analysis Corpus | [] | closed | false | null | 0 | 2020-12-03T11:12:35Z | 2020-12-03T13:35:05Z | 2020-12-03T13:32:24Z | null | github: https://github.com/fbougares/TSAC
paper: https://www.aclweb.org/anthology/W17-1307/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1043/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1043/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1043.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1043",
"merged_at": "2020-12-03T13:32:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1043.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1043"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5925/comments | https://api.github.com/repos/huggingface/datasets/issues/5925/events | https://github.com/huggingface/datasets/issues/5925 | 1,741,941,436 | I_kwDODunzps5n0-q8 | 5,925 | Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets | [] | closed | false | null | 0 | 2023-06-05T14:46:04Z | 2023-06-19T17:22:43Z | 2023-06-19T17:22:43Z | null | ### Describe the bug
Hi all,
after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of `HfApi.list_datasets` was changed so that it returns an `Iterable` instead of a `list`, `datasets.list_datasets` sometimes returns a `list` and sometimes an `Iterable`.
It would be helpful to indicate this in the return type annotation of the `datasets.list_datasets` function.
Thanks,
Martin
### Steps to reproduce the bug
Here, the code crashed after we updated the `datasets` library:
```python
import datasets

# list_datasets no longer returns a list, which leads to an error when one tries to slice it
for dataset_info in datasets.list_datasets(with_details=True)[:limit]:
...
```
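A hedged workaround sketch (assuming only that the return value is iterable; `limit` is the variable from the snippet above):
```python
import itertools

import datasets

# itertools.islice works whether list_datasets returns a list or a lazy
# iterator, so the loop no longer depends on the exact return type.
for dataset_info in itertools.islice(datasets.list_datasets(with_details=True), limit):
    ...
```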
### Expected behavior
It would be helpful to indicate this in the return type annotation of the `datasets.list_datasets` function.
### Environment info
Ubuntu 22.04
datasets 2.12.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5925/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/434/comments | https://api.github.com/repos/huggingface/datasets/issues/434/events | https://github.com/huggingface/datasets/pull/434 | 665,477,638 | MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz | 434 | Fixed check for pyarrow | [] | closed | false | null | 1 | 2020-07-25T00:16:53Z | 2020-07-25T06:36:34Z | 2020-07-25T06:36:34Z | null | Fix the check for pyarrow in `__init__.py`. Previously it would raise an error for pyarrow >= 1.0.0. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/434/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/434/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/434.diff",
"html_url": "https://github.com/huggingface/datasets/pull/434",
"merged_at": "2020-07-25T06:36:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/434.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/434"
} | true | [
"Great, thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/323/comments | https://api.github.com/repos/huggingface/datasets/issues/323/events | https://github.com/huggingface/datasets/pull/323 | 647,521,308 | MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3 | 323 | Add package path to sys when downloading package as github archive | [] | closed | false | null | 2 | 2020-06-29T16:46:01Z | 2020-07-30T14:00:23Z | 2020-07-30T14:00:23Z | null | This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
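For context, a minimal sketch of the kind of path injection being discussed (the helper name and argument are hypothetical, not the PR's actual code):
```python
import sys

def add_module_dir_to_path(module_dir: str) -> None:
    # Prepend the downloaded package directory so that the module's
    # intra-package imports resolve when it is loaded via importlib.
    if module_dir not in sys.path:
        sys.path.insert(0, module_dir)
```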
This PR fixes https://github.com/huggingface/nlp/issues/305 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/323/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/323/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/323.diff",
"html_url": "https://github.com/huggingface/datasets/pull/323",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/323.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/323"
} | true | [
"Sorry for the long diff, everything after the imports comes from `black` for code quality :/ ",
" I think it's fine and I can't think of another way to make the import work anyways.\r\n\r\nMaybe we can have the `sys.path` behavior inside `prepare_module` instead ? Currently it seems to come out of nowhere in the code ^^'\r\nWe could check if external imports have a `__init__.py` and if it is the case then we can add to directory to the `PYTHONPATH`"
] |
https://api.github.com/repos/huggingface/datasets/issues/5437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5437/comments | https://api.github.com/repos/huggingface/datasets/issues/5437/events | https://github.com/huggingface/datasets/issues/5437 | 1,536,837,144 | I_kwDODunzps5bmkYY | 5,437 | Can't load png dataset with 4 channel (RGBA) | [] | closed | false | null | 3 | 2023-01-17T18:22:27Z | 2023-01-18T20:20:15Z | 2023-01-18T20:20:15Z | null | I am trying to create a dataset of about 9000 PNG images, 64x64 in size, all of them 4-channel (RGBA). When I use load_dataset(), a dataset is created from only 2 images. I cannot understand what exactly is interfering. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5437/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5437/timeline | null | completed | null | null | false | [
"Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\r\n\r\n",
"> Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\n> \n> \n\nI have only 1 folder that I use in the load_dataset function with the name \"IMGDATA\" and all my 9000 images are located in this folder.\n`\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"IMGDATA\")\n`\nAt the same time, using another data set with images consisting of 3 RGB channels, everything works",
"Okay, I figured out what was wrong. When uploading my dataset via Google Drive, the images broke and Pillow couldn't open them. As a result, I solved the problem by downloading the ZIP archive"
] |
https://api.github.com/repos/huggingface/datasets/issues/6009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6009/comments | https://api.github.com/repos/huggingface/datasets/issues/6009/events | https://github.com/huggingface/datasets/pull/6009 | 1,792,059,808 | PR_kwDODunzps5U1mus | 6,009 | Fix cast for dictionaries with no keys | [] | closed | false | null | 3 | 2023-07-06T18:48:14Z | 2023-07-07T14:13:00Z | 2023-07-07T14:01:13Z | null | Fix #5677 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6009/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6009/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6009.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6009",
"merged_at": "2023-07-07T14:01:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6009.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6009"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006961 / 0.011353 (-0.004392) | 0.004390 / 0.011008 (-0.006618) | 0.103249 / 0.038508 (0.064741) | 0.048084 / 0.023109 (0.024975) | 0.351213 / 0.275898 (0.075315) | 0.416918 / 0.323480 (0.093439) | 0.005539 / 0.007986 (-0.002446) | 0.003555 / 0.004328 (-0.000774) | 0.079306 / 0.004250 (0.075055) | 0.066937 / 0.037052 (0.029884) | 0.382601 / 0.258489 (0.124112) | 0.406125 / 0.293841 (0.112284) | 0.032269 / 0.128546 (-0.096277) | 0.009133 / 0.075646 (-0.066514) | 0.354449 / 0.419271 (-0.064822) | 0.068978 / 0.043533 (0.025445) | 0.352314 / 0.255139 (0.097175) | 0.390398 / 0.283200 (0.107199) | 0.025640 / 0.141683 (-0.116043) | 1.553865 / 1.452155 (0.101710) | 1.601292 / 1.492716 (0.108576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208310 / 0.018006 (0.190303) | 0.440076 / 0.000490 (0.439586) | 0.000363 / 0.000200 (0.000163) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029173 / 0.037411 (-0.008238) | 0.111323 / 0.014526 (0.096797) | 0.123001 / 0.176557 (-0.053556) | 0.180180 / 0.737135 (-0.556955) | 0.125804 / 0.296338 (-0.170534) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419919 / 0.215209 (0.204710) | 4.194515 / 2.077655 (2.116860) | 1.881234 / 1.504120 (0.377114) | 1.672914 / 1.541195 (0.131720) | 1.723102 / 1.468490 
(0.254612) | 0.543584 / 4.584777 (-4.041193) | 3.822477 / 3.745712 (0.076765) | 1.837946 / 5.269862 (-3.431915) | 1.094975 / 4.565676 (-3.470701) | 0.066788 / 0.424275 (-0.357487) | 0.011689 / 0.007607 (0.004082) | 0.520983 / 0.226044 (0.294938) | 5.209245 / 2.268929 (2.940316) | 2.392916 / 55.444624 (-53.051708) | 2.060042 / 6.876477 (-4.816434) | 2.162291 / 2.142072 (0.020219) | 0.668472 / 4.805227 (-4.136755) | 0.144373 / 6.500664 (-6.356291) | 0.066152 / 0.075469 (-0.009318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251256 / 1.841788 (-0.590532) | 15.161338 / 8.074308 (7.087030) | 14.416133 / 10.191392 (4.224741) | 0.166145 / 0.680424 (-0.514279) | 0.018168 / 0.534201 (-0.516033) | 0.433364 / 0.579283 (-0.145919) | 0.417484 / 0.434364 (-0.016880) | 0.502543 / 0.540337 (-0.037794) | 0.602904 / 1.386936 (-0.784032) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006946 / 0.011353 (-0.004407) | 0.004248 / 0.011008 (-0.006761) | 0.079707 / 0.038508 (0.041199) | 0.046226 / 0.023109 (0.023117) | 0.375864 / 0.275898 (0.099966) | 0.430740 / 0.323480 (0.107260) | 0.006222 / 0.007986 (-0.001764) | 0.003474 / 0.004328 (-0.000854) | 0.079622 / 0.004250 (0.075372) | 0.066666 / 0.037052 (0.029613) | 0.379487 / 0.258489 (0.120998) | 0.423002 / 0.293841 (0.129161) | 0.032836 / 0.128546 (-0.095710) | 0.008976 / 0.075646 (-0.066670) | 0.086578 / 0.419271 (-0.332693) | 0.055651 / 0.043533 (0.012118) | 0.360787 / 0.255139 (0.105648) | 0.384265 / 0.283200 (0.101065) | 0.025350 / 0.141683 (-0.116333) | 1.547880 / 1.452155 (0.095725) | 1.605850 / 1.492716 (0.113134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184227 / 0.018006 (0.166220) | 0.442071 / 0.000490 (0.441582) | 0.002887 / 0.000200 (0.002687) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031923 / 0.037411 (-0.005488) | 0.119093 / 0.014526 (0.104568) | 0.128704 / 0.176557 (-0.047853) | 0.187065 / 0.737135 (-0.550070) | 0.134135 / 0.296338 (-0.162204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455731 / 0.215209 (0.240522) | 4.562911 / 2.077655 (2.485256) | 2.247431 / 1.504120 (0.743311) | 2.053346 / 1.541195 (0.512151) | 2.049611 / 1.468490 (0.581121) | 0.546069 / 4.584777 (-4.038708) | 3.821852 / 3.745712 (0.076140) | 3.358497 / 5.269862 (-1.911364) | 1.667697 / 4.565676 (-2.897979) | 0.067968 / 0.424275 (-0.356307) | 0.012344 / 0.007607 (0.004737) | 0.550864 / 0.226044 (0.324820) | 5.496867 / 2.268929 (3.227939) | 2.680031 / 55.444624 (-52.764594) | 2.328673 / 6.876477 (-4.547804) | 2.436754 / 2.142072 (0.294682) | 0.681195 / 4.805227 (-4.124033) | 0.148761 / 6.500664 (-6.351904) | 0.067716 / 0.075469 (-0.007753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353798 / 1.841788 (-0.487990) | 15.992965 / 8.074308 (7.918657) | 14.051539 / 10.191392 (3.860147) | 0.181087 / 0.680424 (-0.499337) | 0.018653 / 0.534201 (-0.515548) | 0.433499 / 0.579283 (-0.145784) | 0.428845 / 0.434364 (-0.005519) | 0.501100 / 0.540337 (-0.039238) | 0.603666 / 1.386936 (-0.783270) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010983 / 0.011353 (-0.000370) | 0.005630 / 0.011008 (-0.005378) | 0.109967 / 0.038508 (0.071458) | 0.101580 / 0.023109 (0.078471) | 0.490205 / 0.275898 (0.214307) | 0.534653 / 0.323480 (0.211173) | 0.008365 / 0.007986 (0.000379) | 0.004317 / 0.004328 (-0.000012) | 0.082429 / 0.004250 (0.078179) | 0.080556 / 0.037052 (0.043504) | 0.494627 / 0.258489 (0.236138) | 0.544189 / 0.293841 (0.250348) | 0.049419 / 0.128546 (-0.079127) | 0.014033 / 0.075646 (-0.061613) | 0.370406 / 0.419271 (-0.048866) | 0.083468 / 0.043533 (0.039935) | 0.463829 / 0.255139 (0.208690) | 0.507516 / 0.283200 (0.224316) | 0.053266 / 0.141683 (-0.088417) | 1.778680 / 1.452155 (0.326525) | 1.916616 / 1.492716 (0.423900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267646 / 0.018006 (0.249640) | 0.617824 / 0.000490 (0.617334) | 0.007720 / 0.000200 (0.007520) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034464 / 0.037411 (-0.002948) | 0.113626 / 0.014526 (0.099100) | 0.118911 / 0.176557 (-0.057646) | 0.194701 / 0.737135 (-0.542434) | 0.123431 / 0.296338 (-0.172907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.606073 / 0.215209 (0.390863) | 6.086393 / 2.077655 (4.008738) | 2.568712 / 1.504120 (1.064593) | 2.260801 / 1.541195 (0.719606) | 2.411798 / 1.468490 
(0.943307) | 0.876433 / 4.584777 (-3.708344) | 5.521280 / 3.745712 (1.775568) | 5.969722 / 5.269862 (0.699861) | 3.671028 / 4.565676 (-0.894649) | 0.097082 / 0.424275 (-0.327193) | 0.011354 / 0.007607 (0.003747) | 0.713842 / 0.226044 (0.487798) | 7.291172 / 2.268929 (5.022244) | 3.315272 / 55.444624 (-52.129352) | 2.777487 / 6.876477 (-4.098990) | 3.025449 / 2.142072 (0.883377) | 1.014115 / 4.805227 (-3.791112) | 0.217928 / 6.500664 (-6.282736) | 0.083097 / 0.075469 (0.007627) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640060 / 1.841788 (-0.201728) | 25.342172 / 8.074308 (17.267864) | 22.776510 / 10.191392 (12.585118) | 0.227300 / 0.680424 (-0.453124) | 0.032233 / 0.534201 (-0.501968) | 0.507547 / 0.579283 (-0.071736) | 0.647044 / 0.434364 (0.212680) | 0.607019 / 0.540337 (0.066682) | 0.823548 / 1.386936 (-0.563388) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009576 / 0.011353 (-0.001777) | 0.009322 / 0.011008 (-0.001687) | 0.087184 / 0.038508 (0.048676) | 0.100795 / 0.023109 (0.077685) | 0.492138 / 0.275898 (0.216240) | 0.528386 / 0.323480 (0.204906) | 0.006689 / 0.007986 (-0.001296) | 0.004735 / 0.004328 (0.000406) | 0.085519 / 0.004250 (0.081269) | 0.072648 / 0.037052 (0.035595) | 0.496068 / 0.258489 (0.237579) | 0.549634 / 0.293841 (0.255793) | 0.049709 / 0.128546 (-0.078837) | 0.015077 / 0.075646 (-0.060569) | 0.099445 / 0.419271 (-0.319826) | 0.068080 / 0.043533 (0.024547) | 0.500426 / 0.255139 (0.245287) | 0.531437 / 0.283200 (0.248238) | 0.053176 / 0.141683 (-0.088507) | 1.827942 / 1.452155 (0.375787) | 1.914286 / 1.492716 (0.421570) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247658 / 0.018006 (0.229652) | 0.590805 / 0.000490 (0.590315) | 0.005319 / 0.000200 (0.005119) | 0.000165 / 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036993 / 0.037411 (-0.000418) | 0.112944 / 0.014526 (0.098419) | 0.118964 / 0.176557 (-0.057593) | 0.194867 / 0.737135 (-0.542269) | 0.120816 / 0.296338 (-0.175523) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638062 / 0.215209 (0.422853) | 6.246785 / 2.077655 (4.169130) | 2.957779 / 1.504120 (1.453659) | 2.739118 / 1.541195 (1.197924) | 2.795362 / 1.468490 (1.326872) | 0.890532 / 4.584777 (-3.694245) | 5.508198 / 3.745712 (1.762486) | 5.222315 / 5.269862 (-0.047547) | 3.152731 / 4.565676 (-1.412946) | 0.098344 / 0.424275 (-0.325931) | 0.008800 / 0.007607 (0.001193) | 0.757889 / 0.226044 (0.531845) | 7.545715 / 2.268929 (5.276787) | 3.694536 / 55.444624 (-51.750088) | 3.112872 / 6.876477 (-3.763605) | 3.182358 / 2.142072 (1.040285) | 1.028171 / 4.805227 (-3.777056) | 0.215223 / 6.500664 (-6.285441) | 0.085856 / 0.075469 (0.010387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.853138 / 1.841788 (0.011350) | 25.939672 / 8.074308 (17.865364) | 23.118029 / 10.191392 (12.926637) | 0.250599 / 0.680424 (-0.429825) | 0.029942 / 0.534201 (-0.504259) | 0.508748 / 0.579283 (-0.070535) | 0.593966 / 0.434364 (0.159602) | 0.605499 / 0.540337 (0.065162) | 0.863827 / 1.386936 (-0.523109) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/685/comments | https://api.github.com/repos/huggingface/datasets/issues/685/events | https://github.com/huggingface/datasets/pull/685 | 711,182,185 | MDExOlB1bGxSZXF1ZXN0NDk0ODg1NjIz | 685 | Add features parameter to CSV | [] | closed | false | null | 0 | 2020-09-29T14:43:36Z | 2020-09-30T08:39:56Z | 2020-09-30T08:39:54Z | null | Add support for the `features` parameter when loading a csv dataset:
```python
from datasets import load_dataset, Features
features = Features({...})
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```
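For illustration, a hypothetical `features` definition for a CSV with a text column and a label column (the column names and types here are assumptions for the example, not part of this PR):
```python
from datasets import ClassLabel, Features, Value

# hypothetical schema: one string column and one class-label column
features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["negative", "positive"]),
})
```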
I added tests to make sure that it is also compatible with the caching system.
Fix #623 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/685/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/685",
"merged_at": "2020-09-30T08:39:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/685"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5442/comments | https://api.github.com/repos/huggingface/datasets/issues/5442/events | https://github.com/huggingface/datasets/issues/5442 | 1,550,084,450 | I_kwDODunzps5cZGli | 5,442 | OneDrive Integrations with HF Datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2023-01-19T23:12:08Z | 2023-02-24T16:17:51Z | 2023-02-24T16:17:51Z | null | ### Feature request
First of all, I would like to thank everyone in the community who developed the datasets storage and made it freely available.
How can we integrate a OneDrive account, or any other cloud storage provider (like Google Drive), with the **HF** datasets section?
For example, suppose I have **50GB** on my **OneDrive** account and I want to move data between OneDrive and a Hugging Face repo, or vice versa.
### Motivation
Make the datasets section more flexible with other storage providers,
similar to the integration between Google Colab and Google Drive (see the sketch below).
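As the maintainers explain in the comments below, storage providers are integrated through `fsspec`; for providers that already have an fsspec implementation, something like this sketch works. The S3 URI and bucket name are placeholders, and the `s3fs` package plus valid credentials are assumed:
```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("imdb", split="train")
# write to / read back from any fsspec-compatible filesystem
ds.save_to_disk("s3://my-bucket/imdb-train")
ds = load_from_disk("s3://my-bucket/imdb-train")
```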
### Your contribution
This could be done through the Hugging Face CLI. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5442/timeline | null | completed | null | null | false | [
"Hi! \r\n\r\nWe use [`fsspec`](https://github.com/fsspec/filesystem_spec) to integrate with storage providers. You can find more info (and the usage examples) in [our docs](https://huggingface.co/docs/datasets/v2.8.0/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage).\r\n\r\n[`gdrivefs`](https://github.com/fsspec/gdrivefs) makes it possible to use Google Drive as a storage service in Datasets, but this is not the case for OneDrive, since its[ Python SDK](https://github.com/OneDrive/onedrive-sdk-python) is not integrated with `fsspec`. Can you please request the integration with `fsspec` in their repo to address this limitation?",
"I'm closing this issue as implementing a fsspec-compliant OneDrive filesystem is not our responsibility."
] |
https://api.github.com/repos/huggingface/datasets/issues/746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/746/comments | https://api.github.com/repos/huggingface/datasets/issues/746/events | https://github.com/huggingface/datasets/pull/746 | 725,627,235 | MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw | 746 | dataset(ngt): add ngt dataset initial loading script | [] | closed | false | null | 0 | 2020-10-20T14:04:58Z | 2021-03-23T06:19:38Z | 2021-03-23T06:19:38Z | null | Currently only making the paths to the annotation ELAN (eaf) file and videos available.
This is the first way to download this dataset programmatically, rather than manually file by file.
Only the necessary files are downloaded: the annotation files are very small (20MB for all of them), but the video files are large (100GB in total), saved in `mpg` format.
I do not intend to actually store these as an uncompressed array of frames, because it will be huge.
Future updates may add pose estimation files for all videos, making it easier to work with this data. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/746/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/746.diff",
"html_url": "https://github.com/huggingface/datasets/pull/746",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/746.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/746"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4748/comments | https://api.github.com/repos/huggingface/datasets/issues/4748/events | https://github.com/huggingface/datasets/pull/4748 | 1,318,874,913 | PR_kwDODunzps48JTEb | 4,748 | Add image classification processing guide | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-07-27T00:11:11Z | 2022-07-27T17:28:21Z | 2022-07-27T17:16:12Z | null | This PR follows up on #4710 to separate the object detection and image classification guides. It expands a little more on the original guide to include a more complete example of loading and transforming a whole dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4748/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4748/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4748.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4748",
"merged_at": "2022-07-27T17:16:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4748.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4748"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1994/comments | https://api.github.com/repos/huggingface/datasets/issues/1994/events | https://github.com/huggingface/datasets/issues/1994 | 822,871,238 | MDU6SXNzdWU4MjI4NzEyMzg= | 1,994 | not being able to get wikipedia es language | [] | open | false | null | 8 | 2021-03-05T08:31:48Z | 2021-03-11T20:46:21Z | null | null | Hi
I am trying to run code with the Wikipedia config `20200501.es` and I am getting:
```
Traceback (most recent call last):
  File "run_mlm_t5.py", line 608, in <module>
    main()
  File "run_mlm_t5.py", line 359, in main
    datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/load.py", line 612, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 1050, in _download_and_prepare
    "\n\t`{}`".format(usage_example)
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
```
thanks @lhoestq for any suggestion/help | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1994/timeline | null | null | null | null | false | [
"@lhoestq I really appreciate if you could help me providiing processed datasets, I do not really have access to enough resources to run the apache-beam and need to run the codes on these datasets. Only en/de/fr currently works, but I need all the languages more or less. thanks ",
"Hi @dorost1234, I think I can help you a little. Iโve processed some Wikipedia datasets (Spanish inclusive) using the HF/datasets library during recent research.\r\n\r\n@lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Process these datasets have high costs that most of the community can't afford. I think these preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps like @dorost1234\r\n\r\n",
"Thank you so much @jonatasgrosman , I greatly appreciate your help with them. \r\nYes, I unfortunately does not have access to a good resource and need it for my\r\nresearch. I greatly appreciate @lhoestq your help with uploading the processed datasets in huggingface datasets. This would be really helpful for some users like me with not access to high-memory GPU resources.\r\n\r\nthank you both so much again.\r\n\r\nOn Sat, Mar 6, 2021 at 12:55 AM Jonatas Grosman <notifications@github.com>\r\nwrote:\r\n\r\n> Hi @dorost1234 <https://github.com/dorost1234>, I think I can help you a\r\n> little. Iโve processed some Wikipedia datasets (Spanish inclusive) using\r\n> the HF/datasets library during recent research.\r\n>\r\n> @lhoestq <https://github.com/lhoestq> Could you help me to upload these\r\n> preprocessed datasets to Huggingface's repositories? To be more precise,\r\n> I've built datasets from the following languages using the 20201201 dumps:\r\n> Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish.\r\n> Process these datasets have high costs that most of the community can't\r\n> afford. I think these preprocessed datasets I have could be helpful for\r\n> someone without access to high-resource machines to process Wikipedia's\r\n> dumps like @dorost1234 <https://github.com/dorost1234>\r\n>\r\n> โ\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/1994#issuecomment-791798195>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMWMK5GFJFU3ACCJFUDTCFVNZANCNFSM4YUZIF4A>\r\n> .\r\n>\r\n",
"Hi @dorost1234, so sorry, but looking at my files here, I figure out that I've preprocessed files using the HF/datasets for all the languages previously listed by me (Portuguese, Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my tests I've used the [wikicorpus](https://www.cs.upc.edu/~nlp/wikicorpus/) instead).\r\n\r\nOnly with the Spanish Wikipedia's dump, I had the same `KeyError: '000nbsp'` problem already reported here https://github.com/huggingface/datasets/issues/577\r\n\r\nSo nowadays, even with access to a high resource machine, you couldn't be able to get Wikipedia's Spanish data using the HF/datasets :(\r\n\r\n\r\n\r\n\r\n",
"Thanks a lot for the information and help. This would be great to have\nthese datasets.\n@lhoestq <https://github.com/lhoestq> Do you know a way I could get\nsmaller amount of these data like 1 GBtype of each language to deal with\ncomputatioanl requirements? thanks\n\nOn Sat, Mar 6, 2021 at 5:36 PM Jonatas Grosman <notifications@github.com>\nwrote:\n\n> Hi @dorost1234 <https://github.com/dorost1234>, so sorry, but looking at\n> my files here, I figure out that I've preprocessed files using the\n> HF/datasets for all the languages previously listed by me (Portuguese,\n> Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my\n> tests I've used the wikicorpus <https://www.cs.upc.edu/~nlp/wikicorpus/>\n> instead).\n>\n> Only with the Spanish Wikipedia's dump, I had the same KeyError: '000nbsp'\n> problem already reported here #577\n> <https://github.com/huggingface/datasets/issues/577>\n>\n> So nowadays, even with access to a high resource machine, you couldn't be\n> able to get Wikipedia's Spanish data using the HF/datasets :(\n>\n> โ\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/1994#issuecomment-791985546>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMWMO7WOHWLOROPD6Q3TCJKXPANCNFSM4YUZIF4A>\n> .\n>\n",
"Hi ! As mentioned above the Spanish configuration have parsing issues from `mwparserfromhell`. I haven't tested with the latest `mwparserfromhell` >=0.6 though. Which version of `mwparserfromhell` are you using ?\r\n\r\n> @lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Process these datasets have high costs that most of the community can't afford. I think these preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps like @dorost1234\r\n\r\nThat would be awesome ! Feel free to ping me on slack so we can put the processed wikipedia files on google storage with the other ones we've already preprocessed.\r\n\r\n> Do you know a way I could get smaller amount of these data like 1 GBtype of each language to deal with computatioanl requirements? thanks\r\n\r\nI'd suggest to copy the [wikipedia.py](https://github.com/huggingface/datasets/blob/master/datasets/wikipedia/wikipedia.py) to a new script `custom_wikipedia.py` and modify it to only download and process only a subset of the raw data files.\r\nYou can for example replace [this line](https://github.com/huggingface/datasets/blob/64e59fc45ca2134218b3e42e83fddddbe840ff74/datasets/wikipedia/wikipedia.py#L446) by:\r\n```python\r\n if total_bytes >= (1 << 30): # stop if the total amount of data is >= 1GB\r\n break\r\n else:\r\n xml_urls.append(_base_url(lang) + fname)\r\n```\r\n\r\nThen you can load your custom wikipedia dataset with\r\n```python\r\nload_dataset(\"path/to/my/custom_wikipedia.py\", f\"{date}.{language}\")\r\n```",
"Hi @lhoestq!\r\n\r\n> Hi ! As mentioned above the Spanish configuration have parsing issues from mwparserfromhell. I haven't tested with the latest mwparserfromhell >=0.6 though. Which version of mwparserfromhell are you using ?\r\n\r\nI'm using the latest mwparserfromhell version (0.6)\r\n\r\n> That would be awesome ! Feel free to ping me on slack so we can put the processed wikipedia files on google storage with the other ones we've already preprocessed.\r\n\r\nI'll ping you there ๐ ",
"Thank you so much @jonatasgrosman and @lhoestq this would be a great help. I am really thankful to you both and to wonderful Huggingface dataset library allowing us to train models at scale."
] |
https://api.github.com/repos/huggingface/datasets/issues/2979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2979/comments | https://api.github.com/repos/huggingface/datasets/issues/2979/events | https://github.com/huggingface/datasets/issues/2979 | 1,009,634,147 | I_kwDODunzps48Lctj | 2,979 | ValueError when computing f1 metric with average None | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-28T11:34:53Z | 2021-10-01T14:17:38Z | 2021-10-01T14:17:38Z | null | ## Describe the bug
When I try to compute the F1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` call in these metric scripts, which is probably there for the other (scalar) averages. E.g. from f1.py:
```python
return {
"f1": f1_score(
references,
predictions,
labels=labels,
pos_label=pos_label,
average=average,
sample_weight=sample_weight,
).item(),
}
```
Since the result is an array with more than one item, the `.item()` call throws the error. I didn't submit a PR because the call might be needed for the other averages, and I'm not very familiar with the library.
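For what it's worth, a minimal sketch of one possible fix, assuming `.item()` is only needed to unwrap scalar results (this is my guess, not a reviewed patch):
```python
score = f1_score(
    references,
    predictions,
    labels=labels,
    pos_label=pos_label,
    average=average,
    sample_weight=sample_weight,
)
# .item() only succeeds on size-1 results; average=None yields a per-class array
return {"f1": score.item() if score.size == 1 else score.tolist()}
```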
## Steps to reproduce the bug
```python
from datasets import load_metric
metric = load_metric("f1")
metric.add_batch(predictions=[2,34,1,34,1,2,3], references=[23,52,1,3,523,5,8])
metric.compute(average=None)
```
## Expected results
`array([0.66666667, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ])`
## Actual results
ValueError: can only convert an array of size 1 to a Python scalar
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.5
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2979/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2979/timeline | null | completed | null | null | false | [
"Hi @asofiaoliveira, thanks for reporting.\r\n\r\nI'm fixing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/1784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1784/comments | https://api.github.com/repos/huggingface/datasets/issues/1784/events | https://github.com/huggingface/datasets/issues/1784 | 794,659,174 | MDU6SXNzdWU3OTQ2NTkxNzQ= | 1,784 | JSONDecodeError on JSON with multiple lines | [] | closed | false | null | 2 | 2021-01-27T00:19:22Z | 2021-01-31T08:47:18Z | 2021-01-31T08:47:18Z | null | Hello :),
I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported:
```json
{"key1":11, "key2":12, "key3":13}
{"key1":21, "key2":22, "key3":23}
```
But when I try loading a dataset with the same format, I get a `JSONDecodeError: Extra data: line 2 column 1 (char 7142)`. Now, this is expected when using the `json` module to load such a file in a single call, but I was wondering if there are any special arguments to pass to `load_dataset`, since the docs suggest this format is supported.
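For reference, a minimal sketch of the loading call I am attempting (the file path is a placeholder):
```python
from datasets import load_dataset

# one JSON object per line in file.json, as in the docs example above
dataset = load_dataset("json", data_files="path/to/my/file.json")
```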
When I instead convert the JSON file to a single list of dictionaries, I get `AttributeError: 'list' object has no attribute 'keys'`, so that format doesn't work either.
Please let me know :)
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1784/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1784/timeline | null | completed | null | null | false | [
"Hi !\r\n\r\nThe `json` dataset script does support this format. For example loading a dataset with this format works on my side:\r\n```json\r\n{\"key1\":11, \"key2\":12, \"key3\":13}\r\n{\"key1\":21, \"key2\":22, \"key3\":23}\r\n```\r\n\r\nCan you show the full stacktrace please ? Also which version of datasets and pyarrow are you using ?\r\n\r\n",
"Hi Quentin!\r\n\r\nI apologize for bothering you. There was some issue with my pyarrow version as far as I understand. I don't remember the exact version I was using as I didn't check it.\r\n\r\nI repeated it with `datasets 1.2.1` and `pyarrow 2.0.0` and it worked.\r\n\r\nClosing this issue. Again, sorry for the bother.\r\n\r\nThanks,\r\nGunjan"
] |
https://api.github.com/repos/huggingface/datasets/issues/4007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4007/comments | https://api.github.com/repos/huggingface/datasets/issues/4007/events | https://github.com/huggingface/datasets/issues/4007 | 1,179,381,021 | I_kwDODunzps5GS-0d | 4,007 | set_format does not work with multi dimension tensor | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-03-24T11:27:43Z | 2022-03-30T07:28:57Z | 2022-03-24T14:39:29Z | null | ## Describe the bug
`set_format` only converts the innermost dimension of a multi-dimensional list to a tensor, so a 2D value comes back as a list of 1D tensors instead of a single 2D tensor.
## Steps to reproduce the bug
```python
import torch
import numpy as np  # needed only for the commented-out NumPy variant below
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result
ds = ds.with_format("torch")
print(ds[0])
```
## Expected results
```
{'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]}
```
## Actual results
```
{'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]}
```
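(For readers landing here: the workaround from the discussion below is to declare the column as an `Array2D` feature, so the formatter returns one 2D tensor. A minimal sketch:)
```python
from datasets import Array2D, Dataset, Features
import torch

ds = Dataset.from_dict(
    {"A": [torch.rand(2, 2)]},
    features=Features({"A": Array2D(shape=(2, 2), dtype="float32")}),
).with_format("torch")
print(ds[0])  # {'A': tensor([[...], [...]])}, a single 2x2 tensor
```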
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- datasets version: 2.0.0
- Platform: Mac OSX
- Python version: 3.8.12
- PyArrow version: 7.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4007/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4007/timeline | null | completed | null | null | false | [
"Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :\r\n```python\r\nds = Dataset.from_dict({\"A\": [torch.rand((2, 2))]}, features=Features({\"A\": Array2D(shape=(2, 2), dtype=\"float32\")}))\r\n```\r\n",
"Hi @mariosasko I'm facing the same issue and the only work around I've found so far is to convert my `DatasetDict` to a dictionary and then create new objects with `Dataset.from_dict`.\r\n```\r\ndataset = load_dataset(\"my_dataset.py\")\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndict_dataset_test = dataset[\"test\"].to_dict()\r\n...\r\ndataset_test = Dataset.from_dict(dict_dataset_test, features=Features(features))\r\n```\r\nHowever, converting a `Dataset` object to a dict takes quite a lot of time and memory... Is there a way to directly create an `Array2D` without having to transform the original `Dataset` to a dict?",
"Hi! Yes, you can directly pass the `Features` dictionary as `features` in `map` to cast the column to `Array2D`:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example), features=Features(features))\r\n```\r\nOr you can use `cast` after `map` to do that:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndataset = dataset.cast(Features(features))\r\n```",
"Fantastic thank you @mariosasko\r\nThe first option you suggested is indeed way faster ๐ "
] |
https://api.github.com/repos/huggingface/datasets/issues/274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/274/comments | https://api.github.com/repos/huggingface/datasets/issues/274/events | https://github.com/huggingface/datasets/issues/274 | 639,156,625 | MDU6SXNzdWU2MzkxNTY2MjU= | 274 | PG-19 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 4 | 2020-06-15T21:02:26Z | 2020-07-06T15:35:02Z | 2020-07-06T15:35:02Z | null | Hi, and thanks for all your open-sourced work, as always!
I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/274/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/274/timeline | null | completed | null | null | false | [
"Sounds good! Do you want to give it a try?",
"Ok, I'll see if I can figure it out tomorrow!",
"Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that each book from pg19 actually resides as its own text file in a google cloud folder that denotes the split, where the book id is the name of the text file. https://console.cloud.google.com/storage/browser/deepmind-gutenberg/train/ I don't believe there's anywhere else (even in the supplied metadata), where the mapping of id -> split can be found.\r\n\r\nTherefore I end up making a network call `tf.io.gfile.listdir` to get all the files within each of the split directories. https://github.com/lucidrains/nlp/commit/adbacbd85decc80db2347d0882e7dab4faa6fd03#diff-cece8f166a85dd927caf574ba303d39bR78\r\n\r\nDoes this network call need to be eventually stubbed out for testing?",
"Ohh nevermind, I think I can use `download_custom` here with `listdir` as the custom function. Ok, I'll keep trying to make the dummy data work!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1029/comments | https://api.github.com/repos/huggingface/datasets/issues/1029/events | https://github.com/huggingface/datasets/pull/1029 | 755,767,616 | MDExOlB1bGxSZXF1ZXN0NTMxNDE2NzE4 | 1,029 | Add PEC | [] | closed | false | null | 5 | 2020-12-03T02:46:08Z | 2020-12-04T10:58:19Z | 2020-12-03T16:15:06Z | null | A persona-based empathetic conversation dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1029/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1029/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1029.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1029",
"merged_at": "2020-12-03T16:15:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1029.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1029"
} | true | [
"I'm a bit frustrated now to get this right.",
"Hey @zhongpeixiang!\r\nReally nice addition here!\r\n\r\nDid you officially joined the sprint by posting [on the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack?\r\nI can't seem to find you there! Should I add you directly with your gmail address?",
"> Hey @zhongpeixiang!\r\n> Really nice addition here!\r\n> \r\n> Did you officially joined the sprint by posting [on the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack?\r\n> I can't seem to find you there! Should I add you directly with your gmail address?\r\n\r\nThank you for the invitation. This initiative is awesome. Sadly Iโm occupied by my thesis writing this month. Good luck ๐ค",
"As you want @zhongpeixiang (I was maybe not clear but that just mean that by posting on the forum thread that you participated in the current event you will get a special gift (a tee-shirt) for the contribution that you have already done here :-) Nothing more to do)",
"> As you want @zhongpeixiang (I was maybe not clear but that just mean that by posting on the forum thread that you participated in the current event you will get a special gift (a tee-shirt) for the contribution that you have already done here :-) Nothing more to do)\r\n\r\nOh, I misunderstood the post. I'm glad to join."
] |
https://api.github.com/repos/huggingface/datasets/issues/3825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3825/comments | https://api.github.com/repos/huggingface/datasets/issues/3825/events | https://github.com/huggingface/datasets/pull/3825 | 1,159,802,345 | PR_kwDODunzps4z9p4b | 3,825 | Update version and date in Wikipedia dataset | [] | closed | false | null | 1 | 2022-03-04T16:05:27Z | 2022-03-04T17:24:37Z | 2022-03-04T17:24:36Z | null | CC: @geohci | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3825/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3825/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3825.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3825",
"merged_at": "2022-03-04T17:24:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3825.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3825"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3825). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/2705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2705/comments | https://api.github.com/repos/huggingface/datasets/issues/2705/events | https://github.com/huggingface/datasets/issues/2705 | 950,488,583 | MDU6SXNzdWU5NTA0ODg1ODM= | 2,705 | 404 not found error on loading WIKIANN dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-07-22T09:55:50Z | 2021-07-23T08:07:32Z | 2021-07-23T08:07:32Z | null | ## Describe the bug
Unable to retrieve the WikiANN English dataset.
## Steps to reproduce the bug
```python
from datasets import list_datasets, load_dataset, list_metrics, load_metric
WIKIANN = load_dataset("wikiann","en")
```
## Expected results
The Colab notebook should display a successful download status.
## Actual results
FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2705/timeline | null | completed | null | null | false | [
"Hi @ronbutan, thanks for reporting.\r\n\r\nYou are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working.\r\n\r\nWe have opened an issue in the GitHub repository of the original dataset (afshinrahimi/mmner#4) and we have also contacted the author by email to ask if they are planning to fix this issue. See the details here: https://github.com/huggingface/datasets/issues/2691#issuecomment-885463027\r\n\r\nI close this issue because it is the same as in #2691. Feel free to subscribe to that other issue to be informed about any updates."
] |
https://api.github.com/repos/huggingface/datasets/issues/5611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5611/comments | https://api.github.com/repos/huggingface/datasets/issues/5611/events | https://github.com/huggingface/datasets/pull/5611 | 1,611,197,906 | PR_kwDODunzps5LW2Lx | 5,611 | add Dataset.to_list | [] | closed | false | null | 3 | 2023-03-06T11:21:57Z | 2023-03-27T13:34:19Z | 2023-03-27T13:26:38Z | null | close https://github.com/huggingface/datasets/issues/5606
This PR is for adding the `Dataset.to_list` method.
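A minimal usage sketch of the proposed method (the output shown as a comment follows `pyarrow.Table.to_pylist`, i.e. one dict per row):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})
print(ds.to_list())
# [{'a': 1, 'b': 'x'}, {'a': 2, 'b': 'y'}]
```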
Thank you in advance.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5611/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5611.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5611",
"merged_at": "2023-03-27T13:26:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5611.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5611"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, thanks for working on this! `Table.to_pylist` requires PyArrow 7.0+, and our minimal version requirement is 6.0, so we need to bump the version requirement to avoid CI failure. I'll do this in a separate PR.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006857 / 0.011353 (-0.004496) | 0.004711 / 0.011008 (-0.006297) | 0.098332 / 0.038508 (0.059824) | 0.028547 / 0.023109 (0.005438) | 0.307647 / 0.275898 (0.031749) | 0.334891 / 0.323480 (0.011411) | 0.005252 / 0.007986 (-0.002734) | 0.003495 / 0.004328 (-0.000833) | 0.075529 / 0.004250 (0.071279) | 0.042167 / 0.037052 (0.005114) | 0.308509 / 0.258489 (0.050020) | 0.348294 / 0.293841 (0.054453) | 0.032042 / 0.128546 (-0.096504) | 0.011684 / 0.075646 (-0.063962) | 0.321740 / 0.419271 (-0.097531) | 0.057725 / 0.043533 (0.014193) | 0.309431 / 0.255139 (0.054292) | 0.326818 / 0.283200 (0.043618) | 0.093261 / 0.141683 (-0.048422) | 1.475344 / 1.452155 (0.023190) | 1.563952 / 1.492716 (0.071236) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205056 / 0.018006 (0.187050) | 0.421656 / 0.000490 (0.421166) | 0.004167 / 0.000200 (0.003967) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023935 / 0.037411 (-0.013476) | 0.097220 / 0.014526 (0.082695) | 0.104942 / 0.176557 (-0.071615) | 0.170339 / 0.737135 (-0.566796) | 0.107556 / 0.296338 (-0.188782) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424509 / 0.215209 (0.209300) | 4.223637 / 2.077655 (2.145982) | 2.090700 / 1.504120 (0.586580) | 1.902537 / 1.541195 (0.361343) | 1.981192 / 1.468490 
(0.512701) | 0.695272 / 4.584777 (-3.889505) | 3.570169 / 3.745712 (-0.175544) | 1.885007 / 5.269862 (-3.384854) | 1.162828 / 4.565676 (-3.402848) | 0.084956 / 0.424275 (-0.339319) | 0.012818 / 0.007607 (0.005210) | 0.534395 / 0.226044 (0.308351) | 5.354318 / 2.268929 (3.085389) | 2.436875 / 55.444624 (-53.007749) | 2.111365 / 6.876477 (-4.765112) | 2.232874 / 2.142072 (0.090802) | 0.804703 / 4.805227 (-4.000524) | 0.152406 / 6.500664 (-6.348258) | 0.066926 / 0.075469 (-0.008543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198621 / 1.841788 (-0.643166) | 13.907491 / 8.074308 (5.833183) | 14.356286 / 10.191392 (4.164894) | 0.140714 / 0.680424 (-0.539710) | 0.016440 / 0.534201 (-0.517761) | 0.380868 / 0.579283 (-0.198415) | 0.396004 / 0.434364 (-0.038360) | 0.448275 / 0.540337 (-0.092062) | 0.537818 / 1.386936 (-0.849118) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006789 / 0.011353 (-0.004564) | 0.004652 / 0.011008 (-0.006356) | 0.076449 / 0.038508 (0.037941) | 0.028389 / 0.023109 (0.005280) | 0.378644 / 0.275898 (0.102746) | 0.423870 / 0.323480 (0.100391) | 0.005824 / 0.007986 (-0.002162) | 0.003398 / 0.004328 (-0.000931) | 0.075575 / 0.004250 (0.071324) | 0.039656 / 0.037052 (0.002604) | 0.370072 / 0.258489 (0.111583) | 0.441812 / 0.293841 (0.147971) | 0.031817 / 0.128546 (-0.096729) | 0.011701 / 0.075646 (-0.063946) | 0.085759 / 0.419271 (-0.333513) | 0.042328 / 0.043533 (-0.001205) | 0.364103 / 0.255139 (0.108964) | 0.413910 / 0.283200 (0.130711) | 0.090871 / 0.141683 (-0.050812) | 1.505749 / 1.452155 (0.053594) | 1.608555 / 1.492716 (0.115839) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212533 / 0.018006 (0.194527) | 0.404519 / 0.000490 (0.404030) | 0.000373 / 0.000200 (0.000174) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024849 / 0.037411 (-0.012562) | 0.100769 / 0.014526 (0.086243) | 0.110450 / 0.176557 (-0.066107) | 0.161715 / 0.737135 (-0.575420) | 0.113599 / 0.296338 (-0.182739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436780 / 0.215209 (0.221571) | 4.387103 / 2.077655 (2.309448) | 2.081942 / 1.504120 (0.577822) | 1.873661 / 1.541195 (0.332466) | 1.947718 / 1.468490 (0.479228) | 0.696434 / 4.584777 (-3.888343) | 3.405300 / 3.745712 (-0.340412) | 1.897388 / 5.269862 (-3.372474) | 1.169969 / 4.565676 (-3.395707) | 0.083085 / 0.424275 (-0.341190) | 0.012480 / 0.007607 (0.004873) | 0.535635 / 0.226044 (0.309591) | 5.364462 / 2.268929 (3.095533) | 2.531168 / 55.444624 (-52.913457) | 2.184324 / 6.876477 (-4.692153) | 2.228613 / 2.142072 (0.086541) | 0.807127 / 4.805227 (-3.998100) | 0.151971 / 6.500664 (-6.348693) | 0.068430 / 0.075469 (-0.007039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306401 / 1.841788 (-0.535387) | 14.479552 / 8.074308 (6.405244) | 14.428398 / 10.191392 (4.237006) | 0.159505 / 0.680424 (-0.520919) | 0.016856 / 0.534201 (-0.517344) | 0.375197 / 0.579283 (-0.204086) | 0.384328 / 0.434364 (-0.050036) | 0.440688 / 0.540337 (-0.099650) | 0.524998 / 1.386936 (-0.861938) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1/comments | https://api.github.com/repos/huggingface/datasets/issues/1/events | https://github.com/huggingface/datasets/pull/1 | 599,457,467 | MDExOlB1bGxSZXF1ZXN0NDAzMDk1NDYw | 1 | changing nlp.bool to nlp.bool_ | [] | closed | false | null | 0 | 2020-04-14T10:18:02Z | 2022-10-04T09:31:40Z | 2020-04-14T12:01:40Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1",
"merged_at": "2020-04-14T12:01:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/5256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5256/comments | https://api.github.com/repos/huggingface/datasets/issues/5256/events | https://github.com/huggingface/datasets/pull/5256 | 1,452,652,586 | PR_kwDODunzps5DFDY0 | 5,256 | fix wrong print | [] | closed | false | null | 0 | 2022-11-17T03:54:26Z | 2022-11-18T11:05:32Z | 2022-11-18T11:05:32Z | null | print `encoded_dataset.column_names` not `dataset.column_names` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5256/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5256/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5256.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5256",
"merged_at": "2022-11-18T11:05:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5256.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5256"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/759/comments | https://api.github.com/repos/huggingface/datasets/issues/759/events | https://github.com/huggingface/datasets/issues/759 | 729,046,916 | MDU6SXNzdWU3MjkwNDY5MTY= | 759 | (Load dataset failure) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | [] | closed | false | null | 15 | 2020-10-25T15:34:57Z | 2021-08-04T18:10:09Z | 2021-08-04T18:10:09Z | null | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I write the code like this:
```python
from datasets import load_dataset

test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
```
And I got the following errors.
```
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
    module_path, hash = prepare_module(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
    output_path = get_from_cache(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
```
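(Following the maintainer's suggestion in the comments below, a quick connectivity check one can run; this is only a diagnostic, not a fix:)
```python
import requests

# should print <Response [200]> if raw.githubusercontent.com is reachable
print(requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py"))
```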
How can I fix this? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/759/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/759/timeline | null | completed | null | null | false | [
"Are you running the script on a machine with an internet connection ?",
"Yes , I can browse the url through Google Chrome.",
"Does this HEAD request return 200 on your machine ?\r\n```python\r\nimport requests \r\nrequests.head(\"https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py\")\r\n```\r\n\r\nIf it returns 200, could you try again to load the dataset ?",
"Thank you very much for your response.\r\nWhen I run \r\n``` \r\nimport requests \r\nrequests.head(\"https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py\")\r\n```\r\nIt returns 200.\r\n\r\nAnd I try again to load the dataset. I got the following errors again. \r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 475, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"C:\\Users\\666666\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\cnn_dailymail\\0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\\cnn_dailymail.py\", line 253, in _split_generators\r\n dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 175, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 224, in map_nested\r\n mapped = [\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 300, in cached_path\r\n output_path = get_from_cache(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\r\n\r\nConnection error happened but the url was different.\r\n\r\nI add the following code.\r\n```\r\nrequests.head(\"https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nThis didn't return 200\r\nIt returned like this:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 159, in _new_conn\r\n conn = connection.create_connection(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 84, in create_connection\r\n raise err\r\n File 
\"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 74, in create_connection\r\n sock.connect(sa)\r\nTimeoutError: [WinError 10060] \r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 670, in urlopen\r\n httplib_response = self._make_request(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 381, in _make_request\r\n self._validate_conn(conn)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 978, in _validate_conn\r\n conn.connect()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 309, in connect\r\n conn = self._new_conn()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 171, in _new_conn\r\n raise NewConnectionError(\r\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001F6060618E0>: Failed to establish a new connection: [WinError 10060] ",
"Is google drive blocked on your network ?\r\nFor me \r\n```python\r\nrequests.head(\"https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nreturns 200",
"I can browse the google drive through google chrome. It's weird. I can download the dataset through google drive manually.",
"Could you try to update `requests` maybe ?\r\nIt works with 2.23.0 on my side",
"My ```requests``` is 2.24.0 . It still can't return 200.",
"Is it possible I download the dataset manually from google drive and use it for further test ? How can I do this ? I want to reproduce the model in this link https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16. But I can't download the dataset through load_dataset method . I have tried many times and the connection error always happens .\r\n",
"The head request should definitely work, not sure what's going on on your side.\r\nIf you find a way to make it work, please post it here since other users might encounter the same issue.\r\n\r\nIf you don't manage to fix it you can use `load_dataset` on google colab and then save it using `dataset.save_to_disk(\"path/to/dataset\")`.\r\nThen you can download the directory on your machine and do\r\n```python\r\nfrom datasets import load_from_disk\r\ndataset = load_from_disk(\"path/to/local/dataset\")\r\n```",
"Hi\r\nI want to know if this problem has been solved because I encountered a similar issue. Thanks.\r\n`train_data = datasets.load_dataset(\"xsum\", `split=\"train\")`\r\n`ConnectionError:` Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/xsum/xsum.py`",
"Hi @smile0925 ! Do you have an internet connection ? Are you using some kind of proxy that may block the access to this file ?\r\n\r\nOtherwise you can try to update `datasets` since we introduced retries for http requests in the 1.2.0 version\r\n```\r\npip install --upgrade datasets\r\n```\r\nLet me know if that helps.",
"Hi @lhoestq \r\nOh, may be you are right. I find that my server uses some kind of proxy that block the access to this file.\r\n\r\n\r\n",
 Hi @lhoestq\r\n>">
"> Hi @lhoestq\r\n> Oh, maybe you are right. I found that my server uses some kind of proxy that blocks access to this file.\r\n> \r\n\r\nI have the same problem; have you solved it? Many thanks",
"Hi @ZhengxiangShi \r\nYou can first try whether your network can access these files. I need to use VPN to access these files, so I download the files that cannot be accessed to the local in advance, and then use them in the code. Like this,\r\n`train_data = datasets.load_dataset(\"xsum.py\", split=\"train\")`"
] |
https://api.github.com/repos/huggingface/datasets/issues/1073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1073/comments | https://api.github.com/repos/huggingface/datasets/issues/1073/events | https://github.com/huggingface/datasets/pull/1073 | 756,468,034 | MDExOlB1bGxSZXF1ZXN0NTMyMDA4NjIw | 1,073 | Add DialogRE dataset | [] | closed | false | null | 0 | 2020-12-03T18:56:40Z | 2020-12-20T13:34:48Z | 2020-12-04T13:41:51Z | null | Adding the [DialogRE](https://github.com/nlpdata/dialogre) dataset Version 2.
- All tests passed successfully. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1073/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1073/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1073.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1073",
"merged_at": "2020-12-04T13:41:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1073.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1073"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3473/comments | https://api.github.com/repos/huggingface/datasets/issues/3473/events | https://github.com/huggingface/datasets/issues/3473 | 1,086,937,610 | I_kwDODunzps5AyVoK | 3,473 | Iterating over a vision dataset doesn't decode the images | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | closed | false | null | 9 | 2021-12-22T15:26:32Z | 2021-12-27T14:13:21Z | 2021-12-23T15:21:57Z | null | ## Describe the bug
If I load `mnist` and iterate over the dataset, the images are not decoded; the dictionary with the raw bytes is returned instead.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL
mnist = load_dataset("mnist", split="train")
first_image = mnist[0]["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes
first_image = next(iter(mnist))["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails
```
## Expected results
The image should be decoded, as a PIL Image
## Actual results
We get a dictionary
```
{'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None}
```
## Environment info
- `datasets` version: 1.17.1.dev0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyArrow version: 6.0.0
The bug also exists in 1.17.0
## Investigation
I think the issue is that decoding is disabled in `__iter__`:
https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661
Do you remember why it was disabled in the first place @albertvillanova ?
Also cc @mariosasko @NielsRogge
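For illustration, a minimal sketch of the gap described above: indexing goes through `__getitem__` and decodes, while plain iteration does not. (This snippet is an illustrative addition under that assumption, not part of the original report.)
```python
from datasets import load_dataset

mnist = load_dataset("mnist", split="train")

# __getitem__ applies the Image feature's decoding, so index-based
# iteration yields PIL images even though `for sample in mnist: ...`
# returns the raw {"bytes": ..., "path": ...} dict at this point.
for i in range(len(mnist)):
    image = mnist[i]["image"]  # PIL.PngImagePlugin.PngImageFile
    break
```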
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3473/timeline | null | completed | null | null | false | [
"As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.",
"> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed\r\n\r\nhttps://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. After we merge the linked PR, the user can easily disable it again.",
"@mariosasko I wonder why there is no issue in `Audio` feature with decoding disabled in `__iter__`, whereas there is in `Image` feature.\r\n\r\nEnabling decoding in `__iter__` will make fail Audio regressions tests: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_not_decoded\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded\r\n========================= 2 failed, 15 passed in 8.37s =========================",
"Please also note that the regression tests were implemented in accordance with the specifications:\r\n- when doing a `map` (wich calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).",
"> I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature.\r\n\r\n@albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.\r\n",
"Therefore, this is not an issue, neither for Audio nor Image feature.\r\n\r\nCould you please elaborate more on the expected use case? @lhoestq @NielsRogge \r\n\r\nThe expected use cases (in accordance with the specs: see #2324):\r\n- decoding should be enabled when accessing a specific item (`__getitem__`)\r\n- decoding should be disabled while iterating (`__iter__`) to allow preprocessing of non-audio/image features (like label or text, for example) using `.map`\r\n- decoding should be enabled in a `.map` only if the `.map` function accesses the audio/image feature (implemented using `LazyDict`)",
"For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, \r\n\r\ni.e. I did this:\r\n\r\n`batch = next(iter(train_ds)) `\r\n\r\nwhereas I actually wanted to do\r\n\r\n`batch = next(iter(train_dataloader))`\r\n\r\nand then it turned out that in the first case, the image was a string of bytes rather than a Pillow image, hence Quentin opened an issue.",
"Thanks @NielsRogge for the context.\r\n\r\nSo IMO everything is working as expected.\r\n\r\nI'm closing this issue. Feel free to reopen it again if further changes of the specs should be addressed.",
"Thanks for the details :)\r\n\r\nI still think that it's unexpected to get different results when doing\r\n```python\r\nfor i in range(len(dataset)):\r\n sample = dataset[i]\r\n```\r\nand\r\n```python\r\nfor sample in dataset:\r\n pass\r\n```\r\neven though I understand that if you don't need to decode the data, then decoding image or audio data when iterating is a waste of time and resources.\r\n\r\nBut in this case users can still drop the column that need decoding to get the full speed back no ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/1119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1119/comments | https://api.github.com/repos/huggingface/datasets/issues/1119/events | https://github.com/huggingface/datasets/pull/1119 | 757,156,781 | MDExOlB1bGxSZXF1ZXN0NTMyNTc5ODA5 | 1,119 | Add Google Great Code Dataset | [] | closed | false | null | 0 | 2020-12-04T14:46:28Z | 2020-12-06T17:33:14Z | 2020-12-06T17:33:13Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1119/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1119",
"merged_at": "2020-12-06T17:33:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1119"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5020/comments | https://api.github.com/repos/huggingface/datasets/issues/5020/events | https://github.com/huggingface/datasets/pull/5020 | 1,384,684,078 | PR_kwDODunzps4_istJ | 5,020 | Fix URLs of sbu_captions dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 1 | 2022-09-24T14:00:33Z | 2022-09-28T07:20:20Z | 2022-09-28T07:18:23Z | null | Forbidden
You don't have permission to access /~vicente/sbucaptions/sbu-captions-all.tar.gz on this server.
Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
Server at www.cs.virginia.edu Port 443 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5020/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5020.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5020",
"merged_at": "2022-09-28T07:18:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5020.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5020"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3934/comments | https://api.github.com/repos/huggingface/datasets/issues/3934/events | https://github.com/huggingface/datasets/pull/3934 | 1,170,292,492 | PR_kwDODunzps40ftiC | 3,934 | Create MAUVE metric card | [] | closed | false | null | 1 | 2022-03-15T21:36:07Z | 2022-03-18T17:38:14Z | 2022-03-18T17:34:13Z | null | Proposing a MAUVE metric card | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3934/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3934/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3934.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3934",
"merged_at": "2022-03-18T17:34:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3934.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3934"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/867/comments | https://api.github.com/repos/huggingface/datasets/issues/867/events | https://github.com/huggingface/datasets/pull/867 | 745,773,955 | MDExOlB1bGxSZXF1ZXN0NTIzMjc4MjI4 | 867 | Fix some metrics feature types | [] | closed | false | null | 0 | 2020-11-18T15:46:11Z | 2020-11-19T17:35:58Z | 2020-11-19T17:35:57Z | null | Replace the `int` feature type with `int32`, since `int` is not a pyarrow dtype, in these metrics (see the sketch after the list):
- accuracy
- precision
- recall
- f1
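A minimal sketch of the corrected feature declaration (illustrative only; it uses the public `datasets.Features` API rather than quoting the exact diff):
```python
import datasets

# Value("int") fails because pyarrow has no plain "int" dtype;
# these metrics now declare 32-bit integers explicitly.
features = datasets.Features(
    {
        "predictions": datasets.Value("int32"),
        "references": datasets.Value("int32"),
    }
)
```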
I also added the sklearn citation and used keyword arguments to remove future warnings. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/867/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/867/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/867.diff",
"html_url": "https://github.com/huggingface/datasets/pull/867",
"merged_at": "2020-11-19T17:35:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/867.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/867"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4971/comments | https://api.github.com/repos/huggingface/datasets/issues/4971/events | https://github.com/huggingface/datasets/pull/4971 | 1,370,319,516 | PR_kwDODunzps4-zk3g | 4,971 | Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified | [] | closed | false | null | 1 | 2022-09-12T18:08:24Z | 2022-09-13T13:51:08Z | 2022-09-13T13:48:45Z | null | Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform.
This makes the behavior inconsistent with `IterableDataset.map`.
(It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246)
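As a minimal sketch of the behavior this PR restores (the toy columns `a` and `b` are hypothetical; the point is that the column left out of `input_columns` is no longer dropped):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})

# Only column "a" is passed to the function, but column "b" should be
# preserved in the output, matching the behavior of IterableDataset.map.
mapped = ds.map(lambda a: {"a_plus_one": a + 1}, input_columns=["a"])
print(mapped.column_names)  # expected: ["a", "b", "a_plus_one"]
```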
Fix https://github.com/huggingface/datasets/issues/4858 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4971/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4971/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4971.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4971",
"merged_at": "2022-09-13T13:48:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4971.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4971"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/6068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6068/comments | https://api.github.com/repos/huggingface/datasets/issues/6068/events | https://github.com/huggingface/datasets/pull/6068 | 1,820,106,952 | PR_kwDODunzps5WUkZi | 6,068 | fix tqdm lock deletion | [] | closed | false | null | 5 | 2023-07-25T11:17:25Z | 2023-07-25T15:29:39Z | 2023-07-25T15:17:50Z | null | related to https://github.com/huggingface/datasets/issues/6066 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6068/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6068.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6068",
"merged_at": "2023-07-25T15:17:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6068.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6068"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006573 / 0.011353 (-0.004780) | 0.004014 / 0.011008 (-0.006994) | 0.084999 / 0.038508 (0.046491) | 0.074965 / 0.023109 (0.051855) | 0.313294 / 0.275898 (0.037396) | 0.349678 / 0.323480 (0.026198) | 0.005401 / 0.007986 (-0.002585) | 0.003401 / 0.004328 (-0.000927) | 0.065363 / 0.004250 (0.061112) | 0.057159 / 0.037052 (0.020107) | 0.313260 / 0.258489 (0.054771) | 0.354654 / 0.293841 (0.060813) | 0.030895 / 0.128546 (-0.097651) | 0.008605 / 0.075646 (-0.067042) | 0.289190 / 0.419271 (-0.130081) | 0.052474 / 0.043533 (0.008942) | 0.316193 / 0.255139 (0.061054) | 0.339966 / 0.283200 (0.056767) | 0.024112 / 0.141683 (-0.117571) | 1.515606 / 1.452155 (0.063452) | 1.571428 / 1.492716 (0.078711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203284 / 0.018006 (0.185278) | 0.452720 / 0.000490 (0.452230) | 0.003891 / 0.000200 (0.003691) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028992 / 0.037411 (-0.008419) | 0.083170 / 0.014526 (0.068644) | 0.097739 / 0.176557 (-0.078817) | 0.153401 / 0.737135 (-0.583734) | 0.098628 / 0.296338 (-0.197711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390190 / 0.215209 (0.174981) | 3.901272 / 2.077655 (1.823617) | 1.887194 / 1.504120 (0.383074) | 1.723696 / 1.541195 (0.182501) | 1.800537 / 1.468490 
(0.332047) | 0.481758 / 4.584777 (-4.103019) | 3.605098 / 3.745712 (-0.140614) | 3.304482 / 5.269862 (-1.965380) | 2.053515 / 4.565676 (-2.512161) | 0.056997 / 0.424275 (-0.367278) | 0.007347 / 0.007607 (-0.000260) | 0.461367 / 0.226044 (0.235323) | 4.606152 / 2.268929 (2.337223) | 2.470048 / 55.444624 (-52.974576) | 2.060019 / 6.876477 (-4.816458) | 2.320507 / 2.142072 (0.178435) | 0.575050 / 4.805227 (-4.230178) | 0.133030 / 6.500664 (-6.367634) | 0.061508 / 0.075469 (-0.013962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275430 / 1.841788 (-0.566357) | 19.725453 / 8.074308 (11.651145) | 14.396360 / 10.191392 (4.204968) | 0.157980 / 0.680424 (-0.522443) | 0.018516 / 0.534201 (-0.515685) | 0.394717 / 0.579283 (-0.184566) | 0.404948 / 0.434364 (-0.029415) | 0.474001 / 0.540337 (-0.066336) | 0.668463 / 1.386936 (-0.718474) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006697 / 0.011353 (-0.004656) | 0.004206 / 0.011008 (-0.006802) | 0.065458 / 0.038508 (0.026950) | 0.075845 / 0.023109 (0.052735) | 0.365051 / 0.275898 (0.089153) | 0.400919 / 0.323480 (0.077439) | 0.005347 / 0.007986 (-0.002638) | 0.003386 / 0.004328 (-0.000943) | 0.065398 / 0.004250 (0.061148) | 0.057320 / 0.037052 (0.020268) | 0.379161 / 0.258489 (0.120672) | 0.406892 / 0.293841 (0.113051) | 0.031986 / 0.128546 (-0.096560) | 0.008674 / 0.075646 (-0.066972) | 0.071723 / 0.419271 (-0.347549) | 0.049897 / 0.043533 (0.006364) | 0.372034 / 0.255139 (0.116895) | 0.394293 / 0.283200 (0.111094) | 0.023681 / 0.141683 (-0.118002) | 1.479793 / 1.452155 (0.027639) | 1.553105 / 1.492716 (0.060389) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233660 / 0.018006 (0.215654) | 0.454412 / 0.000490 (0.453923) | 0.004473 / 0.000200 (0.004273) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031115 / 0.037411 (-0.006296) | 0.090541 / 0.014526 (0.076015) | 0.104363 / 0.176557 (-0.072193) | 0.161022 / 0.737135 (-0.576114) | 0.105114 / 0.296338 (-0.191225) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427668 / 0.215209 (0.212459) | 4.263145 / 2.077655 (2.185490) | 2.247043 / 1.504120 (0.742923) | 2.082554 / 1.541195 (0.541360) | 2.170505 / 1.468490 (0.702015) | 0.491802 / 4.584777 (-4.092975) | 3.587295 / 3.745712 (-0.158417) | 3.344697 / 5.269862 (-1.925165) | 2.060529 / 4.565676 (-2.505148) | 0.057829 / 0.424275 (-0.366446) | 0.007780 / 0.007607 (0.000173) | 0.503374 / 0.226044 (0.277330) | 5.034742 / 2.268929 (2.765814) | 2.701957 / 55.444624 (-52.742667) | 2.479002 / 6.876477 (-4.397474) | 2.622055 / 2.142072 (0.479982) | 0.591363 / 4.805227 (-4.213864) | 0.133834 / 6.500664 (-6.366830) | 0.062276 / 0.075469 (-0.013193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.338788 / 1.841788 (-0.503000) | 20.333599 / 8.074308 (12.259291) | 14.783196 / 10.191392 (4.591804) | 0.168695 / 0.680424 (-0.511729) | 0.018478 / 0.534201 (-0.515723) | 0.397398 / 0.579283 (-0.181885) | 0.409900 / 0.434364 (-0.024464) | 0.475315 / 0.540337 (-0.065023) | 0.644267 / 1.386936 (-0.742669) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007315 / 0.011353 (-0.004038) | 0.004294 / 0.011008 (-0.006714) | 0.100300 / 0.038508 (0.061792) | 0.077780 / 0.023109 (0.054670) | 0.353728 / 0.275898 (0.077830) | 0.400538 / 0.323480 (0.077058) | 0.005807 / 0.007986 (-0.002178) | 0.003649 / 0.004328 (-0.000680) | 0.077548 / 0.004250 (0.073297) | 0.058834 / 0.037052 (0.021781) | 0.352064 / 0.258489 (0.093574) | 0.399951 / 0.293841 (0.106110) | 0.036472 / 0.128546 (-0.092074) | 0.008653 / 0.075646 (-0.066994) | 0.323089 / 0.419271 (-0.096182) | 0.075127 / 0.043533 (0.031594) | 0.334412 / 0.255139 (0.079273) | 0.375718 / 0.283200 (0.092519) | 0.027915 / 0.141683 (-0.113768) | 1.698795 / 1.452155 (0.246640) | 1.781447 / 1.492716 (0.288730) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216111 / 0.018006 (0.198104) | 0.507706 / 0.000490 (0.507216) | 0.000851 / 0.000200 (0.000651) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030451 / 0.037411 (-0.006960) | 0.087488 / 0.014526 (0.072962) | 0.105094 / 0.176557 (-0.071462) | 0.168130 / 0.737135 (-0.569006) | 0.106791 / 0.296338 (-0.189547) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426291 / 0.215209 (0.211082) | 4.281046 / 2.077655 (2.203391) | 2.162268 / 1.504120 (0.658148) | 1.909503 / 1.541195 (0.368309) | 1.943165 / 1.468490 
(0.474675) | 0.516667 / 4.584777 (-4.068110) | 4.113218 / 3.745712 (0.367506) | 5.931372 / 5.269862 (0.661510) | 3.563521 / 4.565676 (-1.002155) | 0.062415 / 0.424275 (-0.361860) | 0.007577 / 0.007607 (-0.000030) | 0.534588 / 0.226044 (0.308543) | 5.183490 / 2.268929 (2.914561) | 2.790662 / 55.444624 (-52.653962) | 2.258630 / 6.876477 (-4.617846) | 2.499930 / 2.142072 (0.357857) | 0.606154 / 4.805227 (-4.199073) | 0.136093 / 6.500664 (-6.364571) | 0.061151 / 0.075469 (-0.014318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.398392 / 1.841788 (-0.443396) | 21.482150 / 8.074308 (13.407842) | 15.477336 / 10.191392 (5.285944) | 0.192878 / 0.680424 (-0.487546) | 0.021764 / 0.534201 (-0.512437) | 0.437149 / 0.579283 (-0.142134) | 0.439976 / 0.434364 (0.005612) | 0.514498 / 0.540337 (-0.025840) | 0.762642 / 1.386936 (-0.624294) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007504 / 0.011353 (-0.003849) | 0.004526 / 0.011008 (-0.006482) | 0.071008 / 0.038508 (0.032500) | 0.078305 / 0.023109 (0.055195) | 0.436160 / 0.275898 (0.160262) | 0.439048 / 0.323480 (0.115568) | 0.006061 / 0.007986 (-0.001925) | 0.003681 / 0.004328 (-0.000648) | 0.069445 / 0.004250 (0.065195) | 0.059258 / 0.037052 (0.022206) | 0.437745 / 0.258489 (0.179256) | 0.464247 / 0.293841 (0.170406) | 0.033286 / 0.128546 (-0.095260) | 0.009846 / 0.075646 (-0.065800) | 0.076330 / 0.419271 (-0.342941) | 0.051919 / 0.043533 (0.008386) | 0.432817 / 0.255139 (0.177678) | 0.426295 / 0.283200 (0.143095) | 0.029818 / 0.141683 (-0.111865) | 1.747640 / 1.452155 (0.295485) | 1.726653 / 1.492716 (0.233937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.251253 / 0.018006 (0.233247) | 0.483394 / 0.000490 (0.482904) | 0.003992 / 0.000200 (0.003793) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032180 / 0.037411 (-0.005231) | 0.095425 / 0.014526 (0.080900) | 0.105908 / 0.176557 (-0.070648) | 0.164732 / 0.737135 (-0.572403) | 0.115903 / 0.296338 (-0.180435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469467 / 0.215209 (0.254258) | 4.633239 / 2.077655 (2.555584) | 2.517557 / 1.504120 (1.013437) | 2.352726 / 1.541195 (0.811531) | 2.314618 / 1.468490 (0.846128) | 0.548446 / 4.584777 (-4.036331) | 3.908797 / 3.745712 (0.163085) | 3.525941 / 5.269862 (-1.743921) | 2.178858 / 4.565676 (-2.386819) | 0.057614 / 0.424275 (-0.366661) | 0.008604 / 0.007607 (0.000997) | 0.554756 / 0.226044 (0.328711) | 5.325635 / 2.268929 (3.056706) | 3.014266 / 55.444624 (-52.430359) | 2.844165 / 6.876477 (-4.032312) | 2.903019 / 2.142072 (0.760947) | 0.617750 / 4.805227 (-4.187478) | 0.144259 / 6.500664 (-6.356405) | 0.065944 / 0.075469 (-0.009525) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.504625 / 1.841788 (-0.337163) | 22.400787 / 8.074308 (14.326479) | 15.223702 / 10.191392 (5.032310) | 0.213357 / 0.680424 (-0.467067) | 0.019310 / 0.534201 (-0.514891) | 0.456596 / 0.579283 (-0.122687) | 0.473811 / 0.434364 (0.039447) | 0.517800 / 0.540337 (-0.022537) | 0.792468 / 1.386936 (-0.594468) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007420 / 0.011353 (-0.003933) | 0.004502 / 0.011008 (-0.006506) | 0.097882 / 0.038508 (0.059374) | 0.079084 / 0.023109 (0.055975) | 0.361797 / 0.275898 (0.085899) | 0.416563 / 0.323480 (0.093083) | 0.006106 / 0.007986 (-0.001879) | 0.003803 / 0.004328 (-0.000526) | 0.074669 / 0.004250 (0.070418) | 0.062168 / 0.037052 (0.025116) | 0.378844 / 0.258489 (0.120355) | 0.426601 / 0.293841 (0.132760) | 0.035619 / 0.128546 (-0.092927) | 0.009686 / 0.075646 (-0.065960) | 0.336481 / 0.419271 (-0.082790) | 0.065553 / 0.043533 (0.022021) | 0.362501 / 0.255139 (0.107362) | 0.399752 / 0.283200 (0.116552) | 0.028685 / 0.141683 (-0.112998) | 1.683495 / 1.452155 (0.231340) | 1.786105 / 1.492716 (0.293388) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220792 / 0.018006 (0.202786) | 0.501936 / 0.000490 (0.501447) | 0.000389 / 0.000200 (0.000189) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032180 / 0.037411 (-0.005232) | 0.093079 / 0.014526 (0.078553) | 0.107967 / 0.176557 (-0.068589) | 0.171747 / 0.737135 (-0.565389) | 0.107920 / 0.296338 (-0.188418) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444431 / 0.215209 (0.229222) | 4.454934 / 2.077655 (2.377279) | 2.140265 / 1.504120 (0.636145) | 1.960126 / 1.541195 (0.418931) | 2.049649 / 1.468490 
(0.581158) | 0.557861 / 4.584777 (-4.026916) | 4.046240 / 3.745712 (0.300528) | 4.513748 / 5.269862 (-0.756114) | 2.593643 / 4.565676 (-1.972034) | 0.066795 / 0.424275 (-0.357480) | 0.008302 / 0.007607 (0.000694) | 0.535643 / 0.226044 (0.309599) | 5.299429 / 2.268929 (3.030500) | 2.656019 / 55.444624 (-52.788606) | 2.281214 / 6.876477 (-4.595263) | 2.302910 / 2.142072 (0.160837) | 0.661696 / 4.805227 (-4.143532) | 0.149787 / 6.500664 (-6.350877) | 0.069609 / 0.075469 (-0.005860) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.509842 / 1.841788 (-0.331946) | 21.717504 / 8.074308 (13.643196) | 15.825102 / 10.191392 (5.633710) | 0.168115 / 0.680424 (-0.512309) | 0.021637 / 0.534201 (-0.512564) | 0.454270 / 0.579283 (-0.125013) | 0.458531 / 0.434364 (0.024167) | 0.523052 / 0.540337 (-0.017285) | 0.711219 / 1.386936 (-0.675717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007189 / 0.011353 (-0.004164) | 0.004437 / 0.011008 (-0.006571) | 0.075111 / 0.038508 (0.036603) | 0.079245 / 0.023109 (0.056136) | 0.423169 / 0.275898 (0.147270) | 0.455007 / 0.323480 (0.131527) | 0.006076 / 0.007986 (-0.001909) | 0.003819 / 0.004328 (-0.000509) | 0.074976 / 0.004250 (0.070726) | 0.062127 / 0.037052 (0.025075) | 0.456809 / 0.258489 (0.198320) | 0.474707 / 0.293841 (0.180867) | 0.036221 / 0.128546 (-0.092325) | 0.009428 / 0.075646 (-0.066218) | 0.082842 / 0.419271 (-0.336429) | 0.057086 / 0.043533 (0.013553) | 0.436121 / 0.255139 (0.180982) | 0.453934 / 0.283200 (0.170734) | 0.026045 / 0.141683 (-0.115638) | 1.789782 / 1.452155 (0.337627) | 1.820934 / 1.492716 (0.328218) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230790 / 0.018006 (0.212784) | 0.497987 / 0.000490 (0.497497) | 0.002775 / 0.000200 (0.002575) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034418 / 0.037411 (-0.002994) | 0.105567 / 0.014526 (0.091041) | 0.113134 / 0.176557 (-0.063423) | 0.173742 / 0.737135 (-0.563394) | 0.115936 / 0.296338 (-0.180403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.502259 / 0.215209 (0.287050) | 4.969877 / 2.077655 (2.892222) | 2.684860 / 1.504120 (1.180740) | 2.484386 / 1.541195 (0.943192) | 2.543061 / 1.468490 (1.074571) | 0.545733 / 4.584777 (-4.039044) | 4.029660 / 3.745712 (0.283948) | 5.927883 / 5.269862 (0.658021) | 3.528372 / 4.565676 (-1.037305) | 0.065957 / 0.424275 (-0.358318) | 0.008933 / 0.007607 (0.001326) | 0.601630 / 0.226044 (0.375585) | 5.825872 / 2.268929 (3.556944) | 3.230721 / 55.444624 (-52.213904) | 2.891308 / 6.876477 (-3.985169) | 3.054994 / 2.142072 (0.912922) | 0.665480 / 4.805227 (-4.139747) | 0.154815 / 6.500664 (-6.345849) | 0.072997 / 0.075469 (-0.002472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.549892 / 1.841788 (-0.291896) | 22.337484 / 8.074308 (14.263176) | 16.308286 / 10.191392 (6.116894) | 0.189594 / 0.680424 (-0.490830) | 0.021844 / 0.534201 (-0.512357) | 0.456958 / 0.579283 (-0.122325) | 0.459957 / 0.434364 (0.025593) | 0.529014 / 0.540337 (-0.011323) | 0.700359 / 1.386936 (-0.686577) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009050 / 0.011353 (-0.002303) | 0.004968 / 0.011008 (-0.006040) | 0.114315 / 0.038508 (0.075807) | 0.084475 / 0.023109 (0.061366) | 0.426325 / 0.275898 (0.150427) | 0.457870 / 0.323480 (0.134390) | 0.007076 / 0.007986 (-0.000910) | 0.004635 / 0.004328 (0.000307) | 0.082950 / 0.004250 (0.078700) | 0.065414 / 0.037052 (0.028361) | 0.441936 / 0.258489 (0.183447) | 0.476983 / 0.293841 (0.183142) | 0.048575 / 0.128546 (-0.079972) | 0.013929 / 0.075646 (-0.061717) | 0.377498 / 0.419271 (-0.041774) | 0.081503 / 0.043533 (0.037970) | 0.426706 / 0.255139 (0.171567) | 0.460374 / 0.283200 (0.177175) | 0.046052 / 0.141683 (-0.095631) | 1.894896 / 1.452155 (0.442741) | 1.998639 / 1.492716 (0.505923) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313267 / 0.018006 (0.295261) | 0.607501 / 0.000490 (0.607012) | 0.003369 / 0.000200 (0.003169) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032266 / 0.037411 (-0.005145) | 0.120138 / 0.014526 (0.105613) | 0.115044 / 0.176557 (-0.061513) | 0.181374 / 0.737135 (-0.555761) | 0.114681 / 0.296338 (-0.181657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648039 / 0.215209 (0.432830) | 6.005048 / 2.077655 (3.927394) | 2.674524 / 1.504120 (1.170404) | 2.284831 / 1.541195 (0.743637) | 2.360150 / 1.468490 
(0.891660) | 0.888021 / 4.584777 (-3.696756) | 5.419840 / 3.745712 (1.674128) | 4.825816 / 5.269862 (-0.444046) | 3.140876 / 4.565676 (-1.424801) | 0.099511 / 0.424275 (-0.324764) | 0.009176 / 0.007607 (0.001569) | 0.735646 / 0.226044 (0.509602) | 7.224026 / 2.268929 (4.955097) | 3.551146 / 55.444624 (-51.893478) | 2.844374 / 6.876477 (-4.032103) | 3.145307 / 2.142072 (1.003235) | 1.077636 / 4.805227 (-3.727591) | 0.217754 / 6.500664 (-6.282910) | 0.081755 / 0.075469 (0.006286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.670956 / 1.841788 (-0.170831) | 25.524961 / 8.074308 (17.450653) | 23.061596 / 10.191392 (12.870204) | 0.247524 / 0.680424 (-0.432899) | 0.031712 / 0.534201 (-0.502489) | 0.513049 / 0.579283 (-0.066234) | 0.614568 / 0.434364 (0.180204) | 0.574669 / 0.540337 (0.034331) | 0.816621 / 1.386936 (-0.570315) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009384 / 0.011353 (-0.001969) | 0.004959 / 0.011008 (-0.006049) | 0.084782 / 0.038508 (0.046274) | 0.098086 / 0.023109 (0.074977) | 0.544395 / 0.275898 (0.268497) | 0.585157 / 0.323480 (0.261677) | 0.006507 / 0.007986 (-0.001479) | 0.004151 / 0.004328 (-0.000178) | 0.088596 / 0.004250 (0.084345) | 0.069149 / 0.037052 (0.032097) | 0.533109 / 0.258489 (0.274620) | 0.604117 / 0.293841 (0.310276) | 0.047685 / 0.128546 (-0.080861) | 0.013651 / 0.075646 (-0.061996) | 0.096566 / 0.419271 (-0.322705) | 0.062022 / 0.043533 (0.018489) | 0.561897 / 0.255139 (0.306758) | 0.617636 / 0.283200 (0.334436) | 0.034636 / 0.141683 (-0.107047) | 1.854667 / 1.452155 (0.402512) | 1.908923 / 1.492716 (0.416207) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260633 / 0.018006 (0.242627) | 0.622268 / 0.000490 (0.621778) | 0.002116 / 0.000200 (0.001916) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035161 / 0.037411 (-0.002250) | 0.103707 / 0.014526 (0.089181) | 0.115467 / 0.176557 (-0.061090) | 0.180077 / 0.737135 (-0.557059) | 0.118871 / 0.296338 (-0.177467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.628481 / 0.215209 (0.413271) | 6.304929 / 2.077655 (4.227275) | 3.027775 / 1.504120 (1.523655) | 2.753880 / 1.541195 (1.212686) | 2.820442 / 1.468490 (1.351952) | 0.851103 / 4.584777 (-3.733674) | 5.427383 / 3.745712 (1.681670) | 7.434310 / 5.269862 (2.164449) | 4.418790 / 4.565676 (-0.146887) | 0.101733 / 0.424275 (-0.322542) | 0.009701 / 0.007607 (0.002094) | 0.763033 / 0.226044 (0.536989) | 7.497927 / 2.268929 (5.228998) | 3.735335 / 55.444624 (-51.709290) | 3.149200 / 6.876477 (-3.727277) | 3.306214 / 2.142072 (1.164141) | 1.085440 / 4.805227 (-3.719787) | 0.207562 / 6.500664 (-6.293102) | 0.078091 / 0.075469 (0.002622) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.820097 / 1.841788 (-0.021691) | 25.525539 / 8.074308 (17.451231) | 21.874219 / 10.191392 (11.682827) | 0.228391 / 0.680424 (-0.452033) | 0.029584 / 0.534201 (-0.504617) | 0.511546 / 0.579283 (-0.067737) | 0.602719 / 0.434364 (0.168355) | 0.581874 / 0.540337 (0.041537) | 0.802861 / 1.386936 (-0.584075) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/850/comments | https://api.github.com/repos/huggingface/datasets/issues/850/events | https://github.com/huggingface/datasets/pull/850 | 742,369,419 | MDExOlB1bGxSZXF1ZXN0NTIwNTE0MDY3 | 850 | Create ClassLabel for labelling tasks datasets | [] | closed | false | null | 1 | 2020-11-13T11:07:22Z | 2020-11-16T10:32:05Z | 2020-11-16T10:31:58Z | null | This PR adds a specific `ClassLabel` for datasets that involve a labelling task, such as POS, NER or chunking. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/850/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/850/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/850",
"merged_at": "2020-11-16T10:31:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/850"
} | true | [
"@lhoestq Better?"
] |
https://api.github.com/repos/huggingface/datasets/issues/1337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1337/comments | https://api.github.com/repos/huggingface/datasets/issues/1337/events | https://github.com/huggingface/datasets/pull/1337 | 759,710,482 | MDExOlB1bGxSZXF1ZXN0NTM0NjY3NDUz | 1,337 | Add spanish billion words | [] | closed | false | null | 1 | 2020-12-08T19:18:02Z | 2020-12-08T22:59:38Z | 2020-12-08T21:15:27Z | null | Add an unannotated corpus of the Spanish language of nearly 1.5 billion words, compiled from different resources from the web.
The dataset needs 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: Unknown size, total: 10.22 GiB). The tests using dummy data pass, but my laptop isn't able to run them on the real data (I left it running for over 8 hours and it didn't finish). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1337/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1337/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1337.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1337",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1337.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1337"
} | true | [
"The tests failed because of ```RemoteDatasetTest``` so I tried ```git rebase``` and messed everything up. I've made a new clean PR (#1347)."
] |
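A hedged usage sketch for the corpus this PR contributes; it assumes the dataset eventually landed on the Hub as `spanish_billion_words` with a `train` split and a `text` column, and that the installed `datasets` version supports streaming.

```python
from datasets import load_dataset

# Streaming sidesteps the ~8 GiB of generated data that made a full local
# run impractical, as described in the PR body.
ds = load_dataset("spanish_billion_words", split="train", streaming=True)
for example in ds.take(3):  # inspect a few rows without materializing the corpus
    print(example["text"])
```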
https://api.github.com/repos/huggingface/datasets/issues/5471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5471/comments | https://api.github.com/repos/huggingface/datasets/issues/5471/events | https://github.com/huggingface/datasets/pull/5471 | 1,558,557,545 | PR_kwDODunzps5InPA7 | 5,471 | Add num_test_batches option | [] | closed | false | null | 4 | 2023-01-26T18:09:40Z | 2023-01-27T18:16:45Z | 2023-01-27T18:08:36Z | null | `to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are drawn in order to estimate the shapes when creating the TensorFlow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same across all samples. This PR adds an option to change the number of batches drawn, so the user can speed this conversion up.
Running the following while varying `num_test_batches` gives the timings below.
```python
import time
from datasets import load_dataset
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
NUM_TEST_BATCHES = 20  # varied across the runs timed below
dataset = load_dataset("beans")
dataset = dataset["train"].with_format("np")
start = time.time()
dataset = dataset.to_tf_dataset(
columns=["image"],
label_cols=["label"],
batch_size=8,
collate_fn=data_collator,
num_test_batches=NUM_TEST_BATCHES,
)
end = time.time()
print(end - start)
```
NUM_TEST_BATCHES=200: 0.8197s
NUM_TEST_BATCHES=50: 0.3070s
NUM_TEST_BATCHES=2: 0.1417s
NUM_TEST_BATCHES=1: 0.1352s | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5471/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5471/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5471",
"merged_at": "2023-01-27T18:08:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5471"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought this issue was resolved in my parallel `to_tf_dataset` PR! I changed the default `num_test_batches` in `_get_output_signature` to 20 and used a test batch size of 1 to maximize variance to detect shorter samples. I think it's still okay to have this PR, though - but I'd use the new value of 20 as the default!",
"@Rocketknight1 You're right - I didn't have the most recent changes to the default values. Updated now to 20! I still think it would be good to have it configurable from the `to_tf_dataset` call so the user has the option to either make it more robust if many samples are needed, or faster if only one is needed. That, and I selfishly want it for faster tests. ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010441 / 0.011353 (-0.000912) | 0.005605 / 0.011008 (-0.005404) | 0.115712 / 0.038508 (0.077204) | 0.040907 / 0.023109 (0.017797) | 0.357673 / 0.275898 (0.081775) | 0.415427 / 0.323480 (0.091947) | 0.008827 / 0.007986 (0.000842) | 0.006069 / 0.004328 (0.001740) | 0.088985 / 0.004250 (0.084735) | 0.048461 / 0.037052 (0.011409) | 0.362065 / 0.258489 (0.103576) | 0.393643 / 0.293841 (0.099802) | 0.043844 / 0.128546 (-0.084703) | 0.013757 / 0.075646 (-0.061889) | 0.390993 / 0.419271 (-0.028278) | 0.053612 / 0.043533 (0.010079) | 0.348688 / 0.255139 (0.093549) | 0.377818 / 0.283200 (0.094619) | 0.115762 / 0.141683 (-0.025920) | 1.751826 / 1.452155 (0.299672) | 1.773326 / 1.492716 (0.280609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220668 / 0.018006 (0.202662) | 0.536830 / 0.000490 (0.536340) | 0.000467 / 0.000200 (0.000267) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031500 / 0.037411 (-0.005911) | 0.125796 / 0.014526 (0.111270) | 0.137539 / 0.176557 (-0.039017) | 0.184651 / 0.737135 (-0.552484) | 0.145707 / 0.296338 (-0.150632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465876 / 0.215209 (0.250667) | 4.637711 / 2.077655 (2.560056) | 2.132335 / 1.504120 (0.628215) | 1.862593 / 1.541195 (0.321398) | 1.961701 / 1.468490 
(0.493211) | 0.800551 / 4.584777 (-3.784226) | 4.453321 / 3.745712 (0.707608) | 4.291030 / 5.269862 (-0.978832) | 2.256685 / 4.565676 (-2.308991) | 0.097787 / 0.424275 (-0.326488) | 0.014116 / 0.007607 (0.006509) | 0.593395 / 0.226044 (0.367351) | 5.885774 / 2.268929 (3.616845) | 2.666224 / 55.444624 (-52.778400) | 2.276673 / 6.876477 (-4.599803) | 2.358190 / 2.142072 (0.216117) | 0.981398 / 4.805227 (-3.823829) | 0.196997 / 6.500664 (-6.303668) | 0.077020 / 0.075469 (0.001550) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365646 / 1.841788 (-0.476142) | 17.418157 / 8.074308 (9.343849) | 15.838749 / 10.191392 (5.647357) | 0.172749 / 0.680424 (-0.507675) | 0.033711 / 0.534201 (-0.500490) | 0.513306 / 0.579283 (-0.065978) | 0.503201 / 0.434364 (0.068837) | 0.608954 / 0.540337 (0.068616) | 0.734697 / 1.386936 (-0.652239) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008749 / 0.011353 (-0.002604) | 0.005738 / 0.011008 (-0.005270) | 0.084946 / 0.038508 (0.046438) | 0.040386 / 0.023109 (0.017277) | 0.398698 / 0.275898 (0.122800) | 0.435843 / 0.323480 (0.112363) | 0.006812 / 0.007986 (-0.001174) | 0.004567 / 0.004328 (0.000239) | 0.085857 / 0.004250 (0.081607) | 0.054791 / 0.037052 (0.017738) | 0.400381 / 0.258489 (0.141892) | 0.460313 / 0.293841 (0.166472) | 0.042299 / 0.128546 (-0.086247) | 0.014128 / 0.075646 (-0.061519) | 0.100497 / 0.419271 (-0.318775) | 0.058356 / 0.043533 (0.014823) | 0.399774 / 0.255139 (0.144635) | 0.428210 / 0.283200 (0.145011) | 0.122084 / 0.141683 (-0.019598) | 1.683519 / 1.452155 (0.231365) | 1.798024 / 1.492716 (0.305307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255058 / 0.018006 (0.237051) | 0.488831 / 0.000490 (0.488342) | 0.008349 / 0.000200 (0.008149) | 0.000183 / 0.000054 (0.000129) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034870 / 0.037411 (-0.002541) | 0.131818 / 0.014526 (0.117292) | 0.143607 / 0.176557 (-0.032949) | 0.197413 / 0.737135 (-0.539722) | 0.148970 / 0.296338 (-0.147368) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492831 / 0.215209 (0.277622) | 4.963085 / 2.077655 (2.885430) | 2.367803 / 1.504120 (0.863683) | 2.145535 / 1.541195 (0.604340) | 2.289452 / 1.468490 (0.820962) | 0.812691 / 4.584777 (-3.772086) | 4.554068 / 3.745712 (0.808356) | 2.377126 / 5.269862 (-2.892735) | 1.537243 / 4.565676 (-3.028433) | 0.099742 / 0.424275 (-0.324534) | 0.014757 / 0.007607 (0.007149) | 0.628714 / 0.226044 (0.402670) | 6.240197 / 2.268929 (3.971268) | 2.961929 / 55.444624 (-52.482696) | 2.533436 / 6.876477 (-4.343040) | 2.642619 / 2.142072 (0.500547) | 0.976002 / 4.805227 (-3.829225) | 0.197912 / 6.500664 (-6.302752) | 0.078767 / 0.075469 (0.003297) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.522863 / 1.841788 (-0.318925) | 18.210504 / 8.074308 (10.136196) | 15.664172 / 10.191392 (5.472780) | 0.178510 / 0.680424 (-0.501914) | 0.020852 / 0.534201 (-0.513349) | 0.501757 / 0.579283 (-0.077526) | 0.496542 / 0.434364 (0.062178) | 0.624958 / 0.540337 (0.084620) | 0.746960 / 1.386936 (-0.639976) |\n\n</details>\n</details>\n\n\n"
] |
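A hedged sketch of when a larger `num_test_batches` is the safer setting: with ragged inputs such as tokenized text, too few probe batches can pin the inferred signature to the first batch's shape. The dataset, checkpoint, and collator below are assumptions for illustration, and the keyword requires a `datasets` version that includes this PR.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True), batched=True)

# Variable-length input_ids: more probe batches raise the chance of observing
# the length variation, so the signature stays dynamic instead of fixed.
tf_ds = dataset.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=8,
    collate_fn=DataCollatorWithPadding(tokenizer, return_tensors="np"),
    num_test_batches=20,  # the default the maintainers settled on
)
```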
https://api.github.com/repos/huggingface/datasets/issues/2126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2126/comments | https://api.github.com/repos/huggingface/datasets/issues/2126/events | https://github.com/huggingface/datasets/pull/2126 | 842,779,966 | MDExOlB1bGxSZXF1ZXN0NjAyMjcyMjg4 | 2,126 | Replace legacy torch.Tensor constructor with torch.tensor | [] | closed | false | null | 0 | 2021-03-28T16:57:30Z | 2021-03-29T09:27:14Z | 2021-03-29T09:27:13Z | null | The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2126/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2126.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2126",
"merged_at": "2021-03-29T09:27:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2126.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2126"
} | true | [] |
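A short illustration of the ambiguity motivating the change in the record above, using only core `torch` behavior:

```python
import torch

# Legacy constructor: positional ints are treated as a *shape*, and the
# returned float32 tensor is uninitialized (its values are arbitrary).
legacy = torch.Tensor(2, 3)

# Recommended factory: the argument is treated as *data*, copied into a new
# tensor with the dtype inferred from the values.
from_data = torch.tensor([[1, 2, 3], [4, 5, 6]])  # int64 tensor

# The pitfall in one line: torch.Tensor(5) is five uninitialized floats,
# while torch.tensor(5) is a 0-dim tensor holding the value 5.
print(torch.Tensor(5).shape)  # torch.Size([5])
print(torch.tensor(5).shape)  # torch.Size([])
```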