url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2007/comments | https://api.github.com/repos/huggingface/datasets/issues/2007/events | https://github.com/huggingface/datasets/issues/2007 | 824,518,158 | MDU6SXNzdWU4MjQ1MTgxNTg= | 2,007 | How to not load huggingface datasets into memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | [
"So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ",
"The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM.\r\n\r\nThe only thing that's loaded into memory during training is the batch used in the training step.\r\nSo as long as your model works with batch_size = X, then you can load an even bigger dataset and it will work as well with the same batch_size.\r\n\r\nNote that you still have to take into account that some batches take more memory than others, depending on the texts lengths. If it works for a batch with batch_size = X and with texts of maximum length, then it will work for all batches.\r\n\r\nIn your case I guess that there are a few long sentences in the dataset. For those long sentences you get a memory error on your GPU because they're too long. By passing `max_train_samples` you may have taken a subset of the dataset that only contain short sentences. That's probably why in your case it worked only when you set `max_train_samples`.\r\nI'd suggest you to reduce the batch size so that the batches with long sentences can be loaded on the GPU.\r\n\r\nLet me know if that helps or if you have other questions"
] | "2021-03-08T12:35:26Z" | "2021-08-04T18:02:25Z" | "2021-08-04T18:02:25Z" | NONE | null | null | null | Hi
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir
(Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py)
If I do not pass max_train_samples in the above command and load the full dataset, I get a memory issue on a GPU with 24 gigabytes of memory.
I need to train a large-scale mt5 model on large-scale datasets such as Wikipedia (several of them concatenated, or other multilingual datasets like OPUS). Could you help me avoid loading the full data into memory, so that the scripts do not depend on the dataset size?
In the above example, I was hoping the script could work regardless of dataset size, so I can still train the model without subsampling the training set.
thank you so much @lhoestq for your great help in advance
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2007/timeline | null | completed | false |
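The discussion above boils down to two points: `datasets` memory-maps Arrow files on disk, and GPU out-of-memory errors come from the batches rather than the dataset size. A minimal sketch illustrating both, assuming the same `wmt16`/`ro-en` pair used in the command (the streaming option shown here was added in later `datasets` releases):

```python
from datasets import load_dataset

# Memory-mapped loading: the Arrow file lives on disk, not in RAM.
ds = load_dataset("wmt16", "ro-en", split="train")
print(len(ds))   # number of examples; the Arrow data itself stays on disk
batch = ds[:8]   # only this slice is brought into memory, e.g. per_device_train_batch_size = 8

# Streaming alternative: iterate over examples without building the Arrow file at all.
streamed = load_dataset("wmt16", "ro-en", split="train", streaming=True)
print(next(iter(streamed)))
```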
https://api.github.com/repos/huggingface/datasets/issues/2015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2015/comments | https://api.github.com/repos/huggingface/datasets/issues/2015/events | https://github.com/huggingface/datasets/pull/2015 | 825,942,108 | MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0 | 2,015 | Fix ipython function creation in tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-03-09T13:36:59Z" | "2021-03-09T14:06:04Z" | "2021-03-09T14:06:03Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2015",
"merged_at": "2021-03-09T14:06:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2015"
} | The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created.
Fix #2010 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2015/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2092/comments | https://api.github.com/repos/huggingface/datasets/issues/2092/events | https://github.com/huggingface/datasets/issues/2092 | 836,984,043 | MDU6SXNzdWU4MzY5ODQwNDM= | 2,092 | How to disable making arrow tables in load_dataset ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4",
"events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}",
"followers_url": "https://api.github.com/users/Jeevesh8/followers",
"following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}",
"gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jeevesh8",
"id": 48825663,
"login": "Jeevesh8",
"node_id": "MDQ6VXNlcjQ4ODI1NjYz",
"organizations_url": "https://api.github.com/users/Jeevesh8/orgs",
"received_events_url": "https://api.github.com/users/Jeevesh8/received_events",
"repos_url": "https://api.github.com/users/Jeevesh8/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jeevesh8"
} | [] | closed | false | null | [] | null | [
"Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do you think about this ?\r\n\r\nIf you have ideas or suggestions of what you expect from such features as a user, feel free to share them, this is really valuable to us !",
"People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think both the problem can be solved if we provide arrow tables themselves on datasets hub. Can we do this currently @lhoestq ? \r\n",
"@lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on huggingFace datasets hub are made available on GCS, automatically?",
"Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.",
"@lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?",
"We're still working on this :) This will be available soon\r\nUsers will be able to put their processed arrow files on the Hub",
"Hi! You can now use `Dataset.push_to_hub` to store preprocessed files on the Hub.\r\n\r\nAnd to avoid downloading preprocessed files, you can use streaming by setting `streaming=True` in `load_dataset`."
] | "2021-03-21T04:50:07Z" | "2022-06-01T16:49:52Z" | "2022-06-01T16:49:52Z" | NONE | null | null | null | Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2092/timeline | null | completed | false |
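The resolution suggested in the comments above is streaming plus `Dataset.push_to_hub`. A short hedged sketch of both; the dataset name and repository id below are illustrative, not taken from the thread:

```python
from datasets import Dataset, load_dataset

# 1) Skip Arrow table construction entirely by streaming the examples.
streamed = load_dataset("c4", "en", split="train", streaming=True)  # illustrative dataset
print(next(iter(streamed))["text"][:80])

# 2) Share already-processed data so others can skip the preparation step.
ds = Dataset.from_dict({"text": ["hello", "world"]})
# ds.push_to_hub("my-username/my-processed-dataset")  # illustrative repo id; needs a token
```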
https://api.github.com/repos/huggingface/datasets/issues/5081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5081/comments | https://api.github.com/repos/huggingface/datasets/issues/5081/events | https://github.com/huggingface/datasets/issues/5081 | 1,399,340,050 | I_kwDODunzps5TaDwS | 5,081 | Bug loading `sentence-transformers/parallel-sentences` | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"tagging @nreimers ",
"The dataset is sadly not really compatible to be loaded with `load_dataset`. So far it is better to git clone it and to use the files directly.\r\n\r\nA data loading script would be needed to be added to this dataset. But this was too much overhead / not really intuitive how to create it.",
"Since the dataset is a bunch of TSVs we should not need a dataset script I think.\r\n\r\nBy default it tries to load all the TSVs at once, which fails here because they don't all have the same columns (pd.read_csv uses the first line as header by default). But those files have no header ! So, to properly load any TSV file in this repo, one has to pass `names=[...]` for pd.read_csv to know which column names to use.\r\n\r\nTo fix this situation, we can either do\r\n1. replace the TSVs by TSV with column names\r\n2. OR specify the pd.read_csv kwargs as YAML in the dataset card - and `datasets` would use that by default\r\n\r\nWDTY ?",
"There are more issues in the dataset.\r\nTo load OpenSubtitles I have to provide this (see `skiprows`):\r\n\r\n```python\r\ndf_os = pd.read_csv(\r\n \"./parallel-sentences/OpenSubtitles/OpenSubtitles-en-de-train.tsv.gz\", \r\n sep=\"\\t\", \r\n quoting=csv.QUOTE_NONE,\r\n header=None,\r\n names=[\"en\", \"de\"],\r\n skiprows=[540344, 9151700, 10040173, 10040199, 11314673, 11338258, 11869223, 12159297, 12251078, 12303334],\r\n)\r\n```",
"What's wrong with those lines exactly ?\r\nMaybe passing `error_bad_lines=False` (and maybe `warn_bad_lines=True`) can be helpful",
"> What's wrong with those lines exactly ? \r\n\r\nStuff like this: `ParserError: Error tokenizing data. C error: Expected 2 fields in line 540345, saw 3`\r\n\r\n",
"> Maybe passing error_bad_lines=False (and maybe warn_bad_lines=True) can be helpful\r\n\r\nYes. That would hide the issue but not solve it.",
"@nreimers WDYT about the two options mentioned above ?"
] | "2022-10-06T10:47:51Z" | "2022-10-11T10:00:48Z" | null | CONTRIBUTOR | null | null | null | ## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("sentence-transformers/parallel-sentences")
```
raises this:
```
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [4], line 1
----> 1 dataset = load_dataset("sentence-transformers/parallel-sentences", split="train")
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/load.py:1693, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1690 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1692 # Download and prepare data
-> 1693 builder_instance.download_and_prepare(
1694 download_config=download_config,
1695 download_mode=download_mode,
1696 ignore_verifications=ignore_verifications,
1697 try_from_hf_gcs=try_from_hf_gcs,
1698 use_auth_token=use_auth_token,
1699 )
1701 # Build dataset for splits
1702 keep_in_memory = (
1703 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1704 )
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:807, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
801 if not downloaded_from_gcs:
802 prepare_split_kwargs = {
803 "file_format": file_format,
804 "max_shard_size": max_shard_size,
805 **download_and_prepare_kwargs,
806 }
--> 807 self._download_and_prepare(
808 dl_manager=dl_manager,
809 verify_infos=verify_infos,
810 **prepare_split_kwargs,
811 **download_and_prepare_kwargs,
812 )
813 # Sync info
814 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:898, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
894 split_dict.add(split_generator.split_info)
896 try:
897 # Prepare split will record examples associated to the split
--> 898 self._prepare_split(split_generator, **prepare_split_kwargs)
899 except OSError as e:
900 raise OSError(
901 "Cannot find data file. "
902 + (self.manual_download_instructions or "")
903 + "\nOriginal error:\n"
904 + str(e)
905 ) from None
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:1513, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size)
1506 shard_id += 1
1507 writer = writer_class(
1508 features=writer._features,
1509 path=fpath.replace("SSSSS", f"{shard_id:05d}"),
1510 storage_options=self._fs.storage_options,
1511 embed_local_files=embed_local_files,
1512 )
-> 1513 writer.write_table(table)
1514 finally:
1515 num_shards = shard_id + 1
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/arrow_writer.py:540, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
538 if self.pa_writer is None:
539 self._build_writer(inferred_schema=pa_table.schema)
--> 540 pa_table = table_cast(pa_table, self._schema)
541 if self.embed_local_files:
542 pa_table = embed_table_storage(pa_table)
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2044, in table_cast(table, schema)
2032 """Improved version of pa.Table.cast.
2033
2034 It supports casting to feature types stored in the schema metadata.
(...)
2041 table (:obj:`pyarrow.Table`): the casted table
2042 """
2043 if table.schema != schema:
-> 2044 return cast_table_to_schema(table, schema)
2045 elif table.schema.metadata != schema.metadata:
2046 return table.replace_schema_metadata(schema.metadata)
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2005, in cast_table_to_schema(table, schema)
2003 features = Features.from_arrow_schema(schema)
2004 if sorted(table.column_names) != sorted(features):
-> 2005 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2006 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
2007 return pa.Table.from_arrays(arrays, schema=schema)
ValueError: Couldn't cast
Action taken on Parliament's resolutions: see Minutes: string
Následný postup na základě usnesení Parlamentu: viz zápis: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 742
to
{'Membership of Parliament: see Minutes': Value(dtype='string', id=None), 'Състав на Парламента: вж. протоколи': Value(dtype='string', id=None)}
because column names don't match
```
## Expected results
no error
## Actual results
error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.13
- PyArrow version: pyarrow 9.0.0
- transformers 4.22.2
- datasets 2.5.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5081/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5081/timeline | null | null | false |
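A workable pattern for the headerless TSVs discussed above, combining the explicit column names from the comments with pandas' bad-line handling; the file path is the one quoted in the thread, and `on_bad_lines` is the newer pandas replacement for the `error_bad_lines`/`warn_bad_lines` flags mentioned in the comments:

```python
import csv

import pandas as pd

df = pd.read_csv(
    "./parallel-sentences/OpenSubtitles/OpenSubtitles-en-de-train.tsv.gz",  # path from the thread
    sep="\t",
    quoting=csv.QUOTE_NONE,
    header=None,            # the files ship without a header row
    names=["en", "de"],     # so column names must be given explicitly
    on_bad_lines="warn",    # keep going past malformed rows instead of failing
)
print(df.head())
```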
https://api.github.com/repos/huggingface/datasets/issues/4846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4846/comments | https://api.github.com/repos/huggingface/datasets/issues/4846/events | https://github.com/huggingface/datasets/pull/4846 | 1,337,979,897 | PR_kwDODunzps49IYSC | 4,846 | Update documentation card of miam dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PierreColombo",
"id": 22492839,
"login": "PierreColombo",
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PierreColombo"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ahahah :D not sur how i broke something by updating the README :D ",
"Thanks for the fix @PierreColombo. \r\n\r\nOnce a README is modified, our CI runs tests on it, requiring additional quality fixes, so that all READMEs are progressively improved and have some minimal tags/sections/information.\r\n\r\nFor this specific README file, the additional quality requirements of the CI are: https://github.com/huggingface/datasets/runs/7819924428?check_suite_focus=true\r\n```\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/miam/README.md`:\r\nE -\tSection `Additional Information` is missing subsection: `Dataset Curators`.\r\nE -\tSection `Additional Information` is missing subsection: `Contributions`.\r\nE -\t`Additional Information` has an extra subsection: `Benchmark Curators`. Skipping further validation checks for this subsection as expected structure is unknown.\r\n```",
"Thanks a lot Albert :)))"
] | "2022-08-13T14:38:55Z" | "2022-08-17T00:50:04Z" | "2022-08-14T10:26:08Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4846",
"merged_at": "2022-08-14T10:26:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4846"
} | Hi !
Paper has been published at EMNLP. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4846/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4846/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4536/comments | https://api.github.com/repos/huggingface/datasets/issues/4536/events | https://github.com/huggingface/datasets/pull/4536 | 1,278,734,727 | PR_kwDODunzps46C2z6 | 4,536 | Properly raise FileNotFound even if the dataset is private | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-21T17:05:50Z" | "2022-06-28T10:46:51Z" | "2022-06-28T10:36:10Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4536.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4536",
"merged_at": "2022-06-28T10:36:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4536.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4536"
} | `tests/test_load.py::test_load_streaming_private_dataset` was failing because the Hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should raise FileNotFoundError since it first checks for local files before checking the Hub.
Moreover when use_auth_token is not set (default is False), we should not pass `token=None` to HfApi.dataset_info, or it will use the local token by default - instead it should use no token. It's currently not possible to ask for no token to be used, so as a workaround I simply set token="no-token" | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4536/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4536/timeline | null | null | true |
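A rough sketch (not the actual patch) of the behaviour the description above is after: check local files first, then the Hub, and translate an unauthorized or missing repository into `FileNotFoundError`. The `"no-token"` value mirrors the workaround mentioned in the description; the helper function is hypothetical.

```python
import os

from huggingface_hub import HfApi


def resolve_dataset(path: str, use_auth_token=False):
    """Sketch only: local path first, then the Hub, otherwise FileNotFoundError."""
    if os.path.exists(path):
        return path
    token = use_auth_token if use_auth_token else "no-token"  # avoid falling back to the stored token
    try:
        return HfApi().dataset_info(path, token=token)
    except Exception as err:  # 401/404 from the Hub, connection errors, ...
        raise FileNotFoundError(
            f"Couldn't find '{path}' on the local filesystem or on the Hugging Face Hub"
        ) from err
```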
https://api.github.com/repos/huggingface/datasets/issues/1734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1734/comments | https://api.github.com/repos/huggingface/datasets/issues/1734/events | https://github.com/huggingface/datasets/pull/1734 | 784,956,707 | MDExOlB1bGxSZXF1ZXN0NTU0MDYxMzMz | 1,734 | Fix empty token bug for `thainer` and `lst20` | {
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cstorm125",
"id": 15519308,
"login": "cstorm125",
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cstorm125"
} | [] | closed | false | null | [] | null | [] | "2021-01-13T09:55:09Z" | "2021-01-14T10:42:18Z" | "2021-01-14T10:42:18Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1734.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1734",
"merged_at": "2021-01-14T10:42:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1734.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1734"
} | add a condition to check if tokens exist before yielding in `thainer` and `lst20` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1734/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1734/timeline | null | null | true |
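The change described above is a guard in the example generator. A hedged sketch of the pattern; the field names and data structure are illustrative, not the actual loader code:

```python
def _generate_examples(sentences):
    """Yield only sentences that actually contain tokens (sketch of the fix above)."""
    guid = 0
    for tokens, ner_tags in sentences:
        if tokens:  # skip empty-token sentences instead of yielding blank examples
            yield guid, {"tokens": tokens, "ner_tags": ner_tags}
            guid += 1


# Example: the second, empty sentence is silently dropped.
examples = [(["ผม", "ชอบ"], ["O", "O"]), ([], [])]
print(list(_generate_examples(examples)))
```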
https://api.github.com/repos/huggingface/datasets/issues/6086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6086/comments | https://api.github.com/repos/huggingface/datasets/issues/6086/events | https://github.com/huggingface/datasets/issues/6086 | 1,825,009,268 | I_kwDODunzps5sx250 | 6,086 | Support `fsspec` in `Dataset.to_<format>` methods | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
}
] | null | [
"Hi @mariosasko unless someone's already working on it, I guess I can tackle it!",
"Hi! Sure, feel free to tackle this.",
"#self-assign",
"I'm assuming this should just cover `to_csv`, `to_parquet`, and `to_json`, right? As `to_list` and `to_dict` just return Python objects, `to_pandas` returns a `pandas.DataFrame` and `to_sql` just inserts into a SQL DB, is that right?"
] | "2023-07-27T19:08:37Z" | "2023-07-28T15:28:26Z" | null | CONTRIBUTOR | null | null | null | Supporting this should be fairly easy.
Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6086/timeline | null | null | false |
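Since the issue above is a feature request, the remote-path call below is the desired behaviour rather than a guarantee; only the local-path call is known to work. The bucket URL and storage options are illustrative.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

ds.to_parquet("data.parquet")  # already supported: plain local path

# Requested: accept fsspec URLs plus storage_options, e.g.
# ds.to_parquet("s3://my-bucket/data.parquet", storage_options={"anon": False})
```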
https://api.github.com/repos/huggingface/datasets/issues/1770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1770/comments | https://api.github.com/repos/huggingface/datasets/issues/1770/events | https://github.com/huggingface/datasets/issues/1770 | 792,698,148 | MDU6SXNzdWU3OTI2OTgxNDg= | 1,770 | how can I combine 2 dataset with different/same features? | {
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "https://api.github.com/users/world2vec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/world2vec",
"id": 7607120,
"login": "world2vec",
"node_id": "MDQ6VXNlcjc2MDcxMjA=",
"organizations_url": "https://api.github.com/users/world2vec/orgs",
"received_events_url": "https://api.github.com/users/world2vec/received_events",
"repos_url": "https://api.github.com/users/world2vec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/world2vec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/world2vec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/world2vec"
} | [] | closed | false | null | [] | null | [
"Hi ! Currently we don't have a way to `zip` datasets but we plan to add this soon :)\r\nFor now you'll need to use `map` to add the fields from one dataset to the other. See the comment here for more info : https://github.com/huggingface/datasets/issues/853#issuecomment-727872188",
"Good to hear.\r\nCurrently I did not use map , just fetch src and tgt from the 2 dataset and merge them.\r\nIt will be a release if you can deal with it at the backend.\r\nThanks.",
"Hi! You can rename the columns and concatenate the datasets along `axis=1` to get the desired result as follows:\r\n```python\r\nds1 = ds1.rename_column(\"text\", \"src\")\r\nds2 = ds2.rename_column(\"text\", \"tgt\")\r\nds = datasets.concatenate_datasets([\"ds1\", \"ds2\"], axis=1)\r\n```"
] | "2021-01-24T01:26:06Z" | "2022-06-01T15:43:15Z" | "2022-06-01T15:43:15Z" | NONE | null | null | null | to combine 2 dataset by one-one map like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or with different features:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1770/timeline | null | completed | false |
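A runnable version of the `axis=1` recipe from the last comment above; note that the list passed to `concatenate_datasets` must contain the dataset objects themselves, not their names as strings:

```python
from datasets import Dataset, concatenate_datasets

ds1 = Dataset.from_dict({"text": ["hello", "world"]}).rename_column("text", "src")
ds2 = Dataset.from_dict({"text": ["bonjour", "monde"]}).rename_column("text", "tgt")

ds = concatenate_datasets([ds1, ds2], axis=1)  # one-to-one "zip" of the two datasets
print(ds[0])  # {'src': 'hello', 'tgt': 'bonjour'}
```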
https://api.github.com/repos/huggingface/datasets/issues/4199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4199/comments | https://api.github.com/repos/huggingface/datasets/issues/4199/events | https://github.com/huggingface/datasets/issues/4199 | 1,211,953,308 | I_kwDODunzps5IPPCc | 4,199 | Cache miss during reload for datasets using image fetch utilities through map | {
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/apsdehal",
"id": 3616806,
"login": "apsdehal",
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/apsdehal"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null | [
"Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache",
"Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": get_datasets_user_agent()},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nwith \r\n```python\r\nUSER_AGENT = get_datasets_user_agent()\r\n\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": USER_AGENT},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nfixes the issue?",
"Thanks @mariosasko. That does fix the issue. In general, I think these image downloading utilities since they are being used by a lot of image dataset should be provided as a part of `datasets` library right to keep the logic consistent and READMEs smaller? If they already exists, that is also great, please point me to those. I saw that `http_get` does exist.",
"You can find my rationale (and a proposed solution) for why these utilities are not a part of `datasets` here: https://github.com/huggingface/datasets/pull/4100#issuecomment-1097994003.",
"Makes sense. But, I think as the number of image datasets as grow, more people are copying pasting original code from docs to work as it is while we make fixes to them later. I think we do need a central place for these to avoid that confusion as well as more easier access to image datasets. Should we restart that discussion, possible on slack?"
] | "2022-04-22T07:47:08Z" | "2022-04-26T17:00:32Z" | "2022-04-26T13:38:26Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
It looks like the result of a `.map` operation on a dataset misses the cache when you reload the script, so it always runs from scratch. Within the same interpreter session it is able to find the cache and reload it, but when you exit the interpreter and run the script again, the downloading starts from scratch.
## Steps to reproduce the bug
Using the example provided in `red_caps` dataset.
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import os
import re
import urllib
import PIL.Image
import datasets
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": get_datasets_user_agent()},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"]))
return batch
def process_image_urls(batch):
processed_batch_image_urls = []
for image_url in batch["image_url"]:
processed_example_image_urls = []
image_url_splits = re.findall(r"http\S+", image_url)
for image_url_split in image_url_splits:
if "imgur" in image_url_split and "," in image_url_split:
for image_url_part in image_url_split.split(","):
if not image_url_part:
continue
image_url_part = image_url_part.strip()
root, ext = os.path.splitext(image_url_part)
if not root.startswith("http"):
root = "http://i.imgur.com/" + root
root = root.split("#")[0]
if not ext:
ext = ".jpg"
ext = re.split(r"[?%]", ext)[0]
image_url_part = root + ext
processed_example_image_urls.append(image_url_part)
else:
processed_example_image_urls.append(image_url_split)
processed_batch_image_urls.append(processed_example_image_urls)
batch["image_url"] = processed_batch_image_urls
return batch
dset = load_dataset("red_caps", "jellyfish")
dset = dset.map(process_image_urls, batched=True, num_proc=4)
features = dset["train"].features.copy()
features["image"] = datasets.Sequence(datasets.Image())
num_threads = 5
dset = dset.map(fetch_images, batched=True, batch_size=50, features=features, fn_kwargs={"num_threads": num_threads})
```
Run this in an interpreter or as a script twice and see that the cache is missed the second time.
## Expected results
At reload there should not be any cache miss
## Actual results
Every time script is run, cache is missed and dataset is built from scratch.
## Environment info
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4199/timeline | null | completed | false |
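The fix in the comments works because `.map` results are cached under a fingerprint computed by hashing the mapped function and everything it closes over; anything that differs between sessions changes the hash. A small sketch of how to check that a mapped function hashes deterministically (the function and constant here are illustrative):

```python
from datasets.fingerprint import Hasher

USER_AGENT = "my-agent/1.0"  # resolved once, outside the mapped function


def add_user_agent(batch):
    return {"user_agent": [USER_AGENT] * len(batch["image_url"])}


# The same hash across interpreter sessions means the .map cache can be reused on reload.
print(Hasher.hash(add_user_agent))
```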
https://api.github.com/repos/huggingface/datasets/issues/5793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5793/comments | https://api.github.com/repos/huggingface/datasets/issues/5793/events | https://github.com/huggingface/datasets/issues/5793 | 1,684,777,320 | I_kwDODunzps5ka6lo | 5,793 | IterableDataset.with_format("torch") not working | {
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}",
"followers_url": "https://api.github.com/users/jiangwangyi/followers",
"following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiangwangyi",
"id": 39762734,
"login": "jiangwangyi",
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"organizations_url": "https://api.github.com/users/jiangwangyi/orgs",
"received_events_url": "https://api.github.com/users/jiangwangyi/received_events",
"repos_url": "https://api.github.com/users/jiangwangyi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiangwangyi"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Hi ! Thanks for reporting, I'm working on it ;)"
] | "2023-04-26T10:50:23Z" | "2023-06-13T15:57:06Z" | "2023-06-13T15:57:06Z" | NONE | null | null | null | ### Describe the bug
After calling the with_format("torch") method on an IterableDataset instance, the data format is unchanged.
### Steps to reproduce the bug
```python
from datasets import IterableDataset
def gen():
for i in range(4):
yield {"a": [i] * 4}
dataset = IterableDataset.from_generator(gen).with_format("torch")
next(iter(dataset))
```
### Expected behavior
`{"a": torch.tensor([0, 0, 0, 0])}` is expected, but `{"a": [0, 0, 0, 0]}` is observed.
### Environment info
```bash
platform==ubuntu 22.04.01
python==3.10.9
datasets==2.11.0
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5793/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5793/timeline | null | completed | false |
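Before a fix for the issue above, one possible workaround was to do the tensor conversion inside `map` instead of relying on `with_format("torch")`; a hedged sketch reusing the generator from the report:

```python
import torch
from datasets import IterableDataset


def gen():
    for i in range(4):
        yield {"a": [i] * 4}


dataset = IterableDataset.from_generator(gen).map(lambda x: {"a": torch.tensor(x["a"])})
print(next(iter(dataset)))  # {'a': tensor([0, 0, 0, 0])}
```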
https://api.github.com/repos/huggingface/datasets/issues/1818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1818/comments | https://api.github.com/repos/huggingface/datasets/issues/1818/events | https://github.com/huggingface/datasets/issues/1818 | 800,958,776 | MDU6SXNzdWU4MDA5NTg3NzY= | 1,818 | Loading local dataset raise requests.exceptions.ConnectTimeout | {
"avatar_url": "https://avatars.githubusercontent.com/u/15032072?v=4",
"events_url": "https://api.github.com/users/Alxe1/events{/privacy}",
"followers_url": "https://api.github.com/users/Alxe1/followers",
"following_url": "https://api.github.com/users/Alxe1/following{/other_user}",
"gists_url": "https://api.github.com/users/Alxe1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Alxe1",
"id": 15032072,
"login": "Alxe1",
"node_id": "MDQ6VXNlcjE1MDMyMDcy",
"organizations_url": "https://api.github.com/users/Alxe1/orgs",
"received_events_url": "https://api.github.com/users/Alxe1/received_events",
"repos_url": "https://api.github.com/users/Alxe1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Alxe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alxe1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Alxe1"
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting. This was indeed a bug introduced when we moved the `json` dataset loader inside the `datasets` package (before that, the `json` loader was fetched online, as all the other dataset scripts).\r\n\r\nThis should be fixed on master now. Feel free to install `datasets` from source to try it out.\r\nThe fix will be available in the next release of `datasets` in a few days"
] | "2021-02-04T05:55:23Z" | "2022-06-01T15:38:42Z" | "2022-06-01T15:38:42Z" | NONE | null | null | null | Load local dataset:
```
dataset = load_dataset('json', data_files=["../../data/json.json"])
train = dataset["train"]
print(train.features)
train1 = train.map(lambda x: {"labels": 1})
print(train1[:2])
```
but it raised requests.exceptions.ConnectTimeout:
```
/Users/littlely/myvirtual/tf2/bin/python3.7 /Users/littlely/projects/python_projects/pytorch_learning/nlp/dataset/transformers_datasets.py
Traceback (most recent call last):
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 167, in _new_conn
% (self.host, self.timeout),
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/json/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/littlely/projects/python_projects/pytorch_learning/nlp/dataset/transformers_datasets.py", line 12, in <module>
dataset = load_dataset('json', data_files=["../../data/json.json"])
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/load.py", line 591, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/load.py", line 263, in prepare_module
head_hf_s3(path, filename=name, dataset=dataset, max_retries=download_config.max_retries)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 232, in head_hf_s3
max_retries=max_retries,
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 523, in http_head
max_retries=max_retries,
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 458, in _request_with_retry
raise err
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 454, in _request_with_retry
response = requests.request(verb.upper(), url, **params)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/adapters.py", line 504, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/json/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))
Process finished with exit code 1
```
Why does it want to connect to a remote URL when I load local datasets, and how can I fix it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1818/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1818/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4538/comments | https://api.github.com/repos/huggingface/datasets/issues/4538/events | https://github.com/huggingface/datasets/issues/4538 | 1,279,409,786 | I_kwDODunzps5MQj56 | 4,538 | Dataset Viewer issue for Pile of Law | {
"avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4",
"events_url": "https://api.github.com/users/Breakend/events{/privacy}",
"followers_url": "https://api.github.com/users/Breakend/followers",
"following_url": "https://api.github.com/users/Breakend/following{/other_user}",
"gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Breakend",
"id": 1609857,
"login": "Breakend",
"node_id": "MDQ6VXNlcjE2MDk4NTc=",
"organizations_url": "https://api.github.com/users/Breakend/orgs",
"received_events_url": "https://api.github.com/users/Breakend/received_events",
"repos_url": "https://api.github.com/users/Breakend/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Breakend/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Breakend"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null | [
"Hi @Breakend, yes – we'll propose a solution today",
"Thanks so much, I appreciate it!",
"Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!",
"Awesome! Thanks for confirming. cc @severo ",
"Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture d’écran 2022-06-27 à 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture d’écran 2022-06-27 à 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture d’écran 2022-06-27 à 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n"
] | "2022-06-22T02:48:40Z" | "2022-06-27T07:30:23Z" | "2022-06-26T22:26:22Z" | NONE | null | null | null | ### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information?
Thanks so much!
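For reference, a minimal sketch of the resolution described in the comments above — adding `viewer: false` to the YAML front matter at the top of the dataset repo's README.md (all other metadata fields omitted here):
```
---
viewer: false
---
```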
### Owner
Yes | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4538/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3088/comments | https://api.github.com/repos/huggingface/datasets/issues/3088/events | https://github.com/huggingface/datasets/pull/3088 | 1,026,920,369 | PR_kwDODunzps4tOhRx | 3,088 | Use template column_mapping to transmit_format instead of template features | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"Thanks for fixing!"
] | "2021-10-14T23:49:40Z" | "2021-10-15T14:40:05Z" | "2021-10-15T10:11:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3088.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3088",
"merged_at": "2021-10-15T10:11:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3088.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3088"
} | Use `template.column_mapping` to check for modified columns since `template.features` represent a generic template/column mapping.
Fix #3087
TODO:
- [x] Add a test | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3088/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2909/comments | https://api.github.com/repos/huggingface/datasets/issues/2909/events | https://github.com/huggingface/datasets/pull/2909 | 996,002,180 | PR_kwDODunzps4rutdo | 2,909 | fix anli splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zaidalyafeai",
"id": 15667714,
"login": "zaidalyafeai",
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zaidalyafeai"
} | [] | closed | false | null | [] | null | [] | "2021-09-14T13:10:35Z" | "2021-10-13T11:27:49Z" | "2021-10-13T11:27:49Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2909.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2909",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2909.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2909"
} | I can't run the tests for dummy data, facing this error
`ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'.
tests/conftest.py:10: in <module>
from datasets import config
E ImportError: cannot import name 'config' from 'datasets' (unknown location)` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2909/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2909/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2812/comments | https://api.github.com/repos/huggingface/datasets/issues/2812/events | https://github.com/huggingface/datasets/issues/2812 | 972,936,889 | MDU6SXNzdWU5NzI5MzY4ODk= | 2,812 | arXiv Dataset verification problem | {
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eladsegal",
"id": 13485709,
"login": "eladsegal",
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eladsegal"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | [] | "2021-08-17T18:01:48Z" | "2022-01-19T14:15:35Z" | null | CONTRIBUTOR | null | null | null | ## Describe the bug
`dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples, however the data (downloaded from an external source) is updated every week with additional examples.
Therefore, loading the dataset without `ignore_verifications=True` results in a verification error. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2812/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2812/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5772/comments | https://api.github.com/repos/huggingface/datasets/issues/5772/events | https://github.com/huggingface/datasets/pull/5772 | 1,675,033,510 | PR_kwDODunzps5OreXV | 5,772 | Fix JSON builder when missing keys in first row | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009262 / 0.011353 (-0.002091) | 0.006157 / 0.011008 (-0.004851) | 0.125960 / 0.038508 (0.087451) | 0.036213 / 0.023109 (0.013104) | 0.399331 / 0.275898 (0.123433) | 0.453597 / 0.323480 (0.130117) | 0.006990 / 0.007986 (-0.000995) | 0.007320 / 0.004328 (0.002991) | 0.100321 / 0.004250 (0.096070) | 0.048870 / 0.037052 (0.011818) | 0.396284 / 0.258489 (0.137795) | 0.475619 / 0.293841 (0.181778) | 0.052329 / 0.128546 (-0.076217) | 0.019564 / 0.075646 (-0.056083) | 0.430942 / 0.419271 (0.011670) | 0.063224 / 0.043533 (0.019692) | 0.391717 / 0.255139 (0.136578) | 0.448342 / 0.283200 (0.165142) | 0.114055 / 0.141683 (-0.027628) | 1.793204 / 1.452155 (0.341049) | 1.895151 / 1.492716 (0.402435) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283699 / 0.018006 (0.265693) | 0.597194 / 0.000490 (0.596704) | 0.007143 / 0.000200 (0.006944) | 0.000602 / 0.000054 (0.000548) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034761 / 0.037411 (-0.002651) | 0.124555 / 0.014526 (0.110030) | 0.149126 / 0.176557 (-0.027430) | 0.220335 / 0.737135 (-0.516801) | 0.153109 / 0.296338 (-0.143229) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620210 / 0.215209 (0.405001) | 6.229937 / 2.077655 (4.152282) | 2.615203 / 1.504120 (1.111083) | 2.239337 / 1.541195 (0.698143) | 2.262138 / 1.468490 
(0.793648) | 1.196498 / 4.584777 (-3.388279) | 5.609932 / 3.745712 (1.864220) | 3.031347 / 5.269862 (-2.238515) | 2.025530 / 4.565676 (-2.540146) | 0.139828 / 0.424275 (-0.284447) | 0.015476 / 0.007607 (0.007869) | 0.768964 / 0.226044 (0.542920) | 7.728677 / 2.268929 (5.459748) | 3.336407 / 55.444624 (-52.108217) | 2.700055 / 6.876477 (-4.176422) | 2.765223 / 2.142072 (0.623151) | 1.409073 / 4.805227 (-3.396155) | 0.246849 / 6.500664 (-6.253815) | 0.081231 / 0.075469 (0.005762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.593836 / 1.841788 (-0.247952) | 18.020525 / 8.074308 (9.946216) | 21.766822 / 10.191392 (11.575430) | 0.258615 / 0.680424 (-0.421809) | 0.026895 / 0.534201 (-0.507306) | 0.529823 / 0.579283 (-0.049460) | 0.623470 / 0.434364 (0.189106) | 0.628171 / 0.540337 (0.087833) | 0.745249 / 1.386936 (-0.641687) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008624 / 0.011353 (-0.002729) | 0.006317 / 0.011008 (-0.004691) | 0.097315 / 0.038508 (0.058807) | 0.035217 / 0.023109 (0.012108) | 0.440197 / 0.275898 (0.164299) | 0.473863 / 0.323480 (0.150383) | 0.006722 / 0.007986 (-0.001264) | 0.006444 / 0.004328 (0.002116) | 0.102056 / 0.004250 (0.097806) | 0.047142 / 0.037052 (0.010089) | 0.452476 / 0.258489 (0.193986) | 0.487619 / 0.293841 (0.193778) | 0.052456 / 0.128546 (-0.076090) | 0.018735 / 0.075646 (-0.056911) | 0.114656 / 0.419271 (-0.304616) | 0.062577 / 0.043533 (0.019044) | 0.444471 / 0.255139 (0.189332) | 0.494264 / 0.283200 (0.211065) | 0.117112 / 0.141683 (-0.024571) | 1.848965 / 1.452155 (0.396810) | 1.984008 / 1.492716 (0.491292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290494 / 0.018006 (0.272488) | 0.588415 / 0.000490 (0.587925) | 0.000459 / 0.000200 (0.000259) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032873 / 0.037411 (-0.004538) | 0.131139 / 0.014526 (0.116614) | 0.140268 / 0.176557 (-0.036289) | 0.204561 / 0.737135 (-0.532574) | 0.147443 / 0.296338 (-0.148895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.636899 / 0.215209 (0.421690) | 6.236139 / 2.077655 (4.158484) | 2.801468 / 1.504120 (1.297348) | 2.398808 / 1.541195 (0.857613) | 2.493150 / 1.468490 (1.024659) | 1.228845 / 4.584777 (-3.355932) | 5.675874 / 3.745712 (1.930162) | 3.084939 / 5.269862 (-2.184922) | 2.061310 / 4.565676 (-2.504367) | 0.142285 / 0.424275 (-0.281990) | 0.014972 / 0.007607 (0.007365) | 0.786599 / 0.226044 (0.560555) | 7.876036 / 2.268929 (5.607107) | 3.476136 / 55.444624 (-51.968489) | 2.847922 / 6.876477 (-4.028555) | 3.040326 / 2.142072 (0.898253) | 1.448538 / 4.805227 (-3.356690) | 0.257230 / 6.500664 (-6.243434) | 0.085137 / 0.075469 (0.009668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.668173 / 1.841788 (-0.173615) | 18.668520 / 8.074308 (10.594212) | 20.535542 / 10.191392 (10.344150) | 0.244580 / 0.680424 (-0.435844) | 0.026364 / 0.534201 (-0.507837) | 0.531753 / 0.579283 (-0.047530) | 0.616578 / 0.434364 (0.182214) | 0.618906 / 0.540337 (0.078569) | 0.738785 / 1.386936 (-0.648151) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7265cafa3103d77d6d52aa897088faefcd96659 \"CML watermark\")\n"
] | "2023-04-19T14:32:57Z" | "2023-04-21T06:45:13Z" | "2023-04-21T06:35:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5772",
"merged_at": "2023-04-21T06:35:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5772"
} | Until now, the JSON builder only considered the keys present in the first element of the list:
- Either explicitly: by passing index 0 in `dataset[0].keys()`
- Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values"
This PR fixes the bug by considering the union of the keys present in all the rows.
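A minimal sketch of the difference in behavior (the sample rows and variable names below are made up for illustration, not taken from this PR):
```python
import pyarrow as pa

rows = [{"a": 1}, {"a": 2, "b": "x"}]  # key "b" is missing from the first row

# First-row inference (the old behavior, per the pyarrow doc quoted above): column "b" is dropped
table_old = pa.Table.from_pylist(rows)
print(table_old.column_names)  # ['a']

# Union of keys across all rows (the behavior this fix implements for the JSON builder)
keys = {key for row in rows for key in row}
table_new = pa.Table.from_pylist([{k: row.get(k) for k in keys} for row in rows])
print(table_new.column_names)  # ['a', 'b'] (column order may vary)
```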
Fix #5726. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5772/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2925/comments | https://api.github.com/repos/huggingface/datasets/issues/2925/events | https://github.com/huggingface/datasets/pull/2925 | 997,407,034 | PR_kwDODunzps4rzJ9s | 2,925 | Add tutorial for no-code dataset upload | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [
"Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\ndataset = load_dataset(\"stevhliu/demo\", data_files=data_files)\r\nprint(dataset[\"train\"][0])\r\n```",
"Perfect, feel free to mark this PR ready for review :)\r\n\r\ncc @albertvillanova do you have any comment ? You can check the tutorial here:\r\nhttps://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html\r\n\r\nMaybe we can just add a list of supported file types:\r\n- csv\r\n- json\r\n- json lines\r\n- text\r\n- parquet",
"I just added a mention of the login for private datasets. Don't hesitate to edit or comment.\r\n\r\nOtherwise I think it's all good, feel free to merge it @stevhliu if you don't have other changes to make :)"
] | "2021-09-15T18:54:42Z" | "2021-09-27T17:51:55Z" | "2021-09-27T17:51:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2925",
"merged_at": "2021-09-27T17:51:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2925"
} | This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2925/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3041/comments | https://api.github.com/repos/huggingface/datasets/issues/3041/events | https://github.com/huggingface/datasets/pull/3041 | 1,018,911,385 | PR_kwDODunzps4s1ZAc | 3,041 | Load private data files + use glob on ZIP archives for json/csv/etc. module inference | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"I have an error on windows:\r\n```python\r\naiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]\r\n```\r\nat the `fsspec` call in `xglob`:\r\n```python\r\nfs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)\r\n```\r\n\r\nLooks like the windows CI has an SSL issue... ",
"I can reproduce it on my windows machine. On linux it works fine though",
"I'm just skipping the windows test for now",
"The Windows CI failure seems unrelated to this PR\r\n```python\r\nERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3\r\n```"
] | "2021-10-06T18:16:36Z" | "2021-10-12T15:25:48Z" | "2021-10-12T15:25:46Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3041.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3041",
"merged_at": "2021-10-12T15:25:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3041.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3041"
} | As mentioned in https://github.com/huggingface/datasets/issues/3032 loading data files from private repository isn't working correctly because of the data files resolved.
#2986 did a refactor of the data files resolver. I added authentication to it.
I also improved it to glob inside ZIP archives to look for json/csv/etc. files and infer which dataset builder (json/csv/etc.) to use.
Fix https://github.com/huggingface/datasets/issues/3032
Note that #2986 needs to get merged first | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3041/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1413/comments | https://api.github.com/repos/huggingface/datasets/issues/1413/events | https://github.com/huggingface/datasets/pull/1413 | 760,615,090 | MDExOlB1bGxSZXF1ZXN0NTM1NDE4MDY2 | 1,413 | Add OffComBR | {
"avatar_url": "https://avatars.githubusercontent.com/u/1206395?v=4",
"events_url": "https://api.github.com/users/hugoabonizio/events{/privacy}",
"followers_url": "https://api.github.com/users/hugoabonizio/followers",
"following_url": "https://api.github.com/users/hugoabonizio/following{/other_user}",
"gists_url": "https://api.github.com/users/hugoabonizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hugoabonizio",
"id": 1206395,
"login": "hugoabonizio",
"node_id": "MDQ6VXNlcjEyMDYzOTU=",
"organizations_url": "https://api.github.com/users/hugoabonizio/orgs",
"received_events_url": "https://api.github.com/users/hugoabonizio/received_events",
"repos_url": "https://api.github.com/users/hugoabonizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hugoabonizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugoabonizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hugoabonizio"
} | [] | closed | false | null | [] | null | [
"Hello @hugoabonizio, thanks for the contribution.\r\nRegarding the fake data, you can generate it manually.\r\nRunning the `python datasets-cli dummy_data datasets/offcombr` should give you instructions on how to manually create the dummy data.\r\nFor reference, here is a spec for `.arff` files : https://www.cs.waikato.ac.nz/ml/weka/arff.html",
"@lhoestq again the failing tests doesn't seem to be related",
"merging since the CI is fixed on master"
] | "2020-12-09T19:38:08Z" | "2020-12-14T18:06:45Z" | "2020-12-14T16:51:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1413.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1413",
"merged_at": "2020-12-14T16:51:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1413.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1413"
} | Add [OffComBR](https://github.com/rogersdepelle/OffComBR) from [Offensive Comments in the Brazilian Web: a dataset and baseline results](https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222) paper.
But I'm having a hard time generating dummy data since the original dataset extension is `.arff` and the [_create_dummy_data function](https://github.com/huggingface/datasets/blob/a4aeaf911240057286a01bff1b1d75a89aedd57b/src/datasets/commands/dummy_data.py#L185) doesn't allow it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1413/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1413/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3682/comments | https://api.github.com/repos/huggingface/datasets/issues/3682/events | https://github.com/huggingface/datasets/pull/3682 | 1,124,434,330 | PR_kwDODunzps4yGFml | 3,682 | adding told-br for toxic/abusive hatespeech detection | {
"avatar_url": "https://avatars.githubusercontent.com/u/26556320?v=4",
"events_url": "https://api.github.com/users/JAugusto97/events{/privacy}",
"followers_url": "https://api.github.com/users/JAugusto97/followers",
"following_url": "https://api.github.com/users/JAugusto97/following{/other_user}",
"gists_url": "https://api.github.com/users/JAugusto97/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JAugusto97",
"id": 26556320,
"login": "JAugusto97",
"node_id": "MDQ6VXNlcjI2NTU2MzIw",
"organizations_url": "https://api.github.com/users/JAugusto97/orgs",
"received_events_url": "https://api.github.com/users/JAugusto97/received_events",
"repos_url": "https://api.github.com/users/JAugusto97/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JAugusto97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JAugusto97/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JAugusto97"
} | [] | closed | false | null | [] | null | [
"Sorry for using multiple github accounts, I didn't notice I was using my professional account to commit/push. Please consider this @JAugusto97 account as the correct one.",
"Will remake the PR with the correct github account."
] | "2022-02-04T17:18:29Z" | "2022-02-07T03:23:24Z" | "2022-02-04T17:36:40Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3682.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3682",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3682.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3682"
} | Hey,
I'm adding our dataset from our paper published at AACL 2020. Feel free to ask for modifications.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3682/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3682/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2804/comments | https://api.github.com/repos/huggingface/datasets/issues/2804/events | https://github.com/huggingface/datasets/pull/2804 | 971,353,437 | MDExOlB1bGxSZXF1ZXN0NzEzMTA2NTMw | 2,804 | Add Food-101 | {
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
} | [] | closed | false | null | [] | null | [] | "2021-08-16T04:26:15Z" | "2021-08-20T14:31:33Z" | "2021-08-19T12:48:06Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2804.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2804",
"merged_at": "2021-08-19T12:48:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2804.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2804"
} | Adds image classification dataset [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2804/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2804/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1834/comments | https://api.github.com/repos/huggingface/datasets/issues/1834/events | https://github.com/huggingface/datasets/pull/1834 | 803,517,094 | MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4 | 1,834 | Fixes base_url of limit dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Paethon",
"id": 237550,
"login": "Paethon",
"node_id": "MDQ6VXNlcjIzNzU1MA==",
"organizations_url": "https://api.github.com/users/Paethon/orgs",
"received_events_url": "https://api.github.com/users/Paethon/received_events",
"repos_url": "https://api.github.com/users/Paethon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Paethon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Paethon"
} | [] | closed | false | null | [] | null | [
"OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue."
] | "2021-02-08T12:26:35Z" | "2021-02-08T12:42:50Z" | "2021-02-08T12:42:50Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1834",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1834"
} | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1834/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4859/comments | https://api.github.com/repos/huggingface/datasets/issues/4859/events | https://github.com/huggingface/datasets/issues/4859 | 1,342,231,016 | I_kwDODunzps5QANHo | 4,859 | can't install using conda on Windows 10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/22627691?v=4",
"events_url": "https://api.github.com/users/xoffey/events{/privacy}",
"followers_url": "https://api.github.com/users/xoffey/followers",
"following_url": "https://api.github.com/users/xoffey/following{/other_user}",
"gists_url": "https://api.github.com/users/xoffey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xoffey",
"id": 22627691,
"login": "xoffey",
"node_id": "MDQ6VXNlcjIyNjI3Njkx",
"organizations_url": "https://api.github.com/users/xoffey/orgs",
"received_events_url": "https://api.github.com/users/xoffey/received_events",
"repos_url": "https://api.github.com/users/xoffey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xoffey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xoffey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xoffey"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | "2022-08-17T19:57:37Z" | "2022-08-17T19:57:37Z" | null | NONE | null | null | null | ## Describe the bug
I wanted to install using conda or Anaconda navigator. That didn't work, so I had to install using pip.
## Steps to reproduce the bug
conda install -c huggingface -c conda-forge datasets
## Expected results
Should have indicated successful installation.
## Actual results
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
... took forever, so I cancelled it with ctrl-c
## Environment info
- `datasets` version: 2.4.0 # after installing with pip
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
- conda version: 4.13.0
conda info
active environment : base
active env location : G:\anaconda2022
shell level : 1
user config file : C:\Users\michael\.condarc
populated config files : C:\Users\michael\.condarc
conda version : 4.13.0
conda-build version : 3.21.8
python version : 3.9.12.final.0
virtual packages : __cuda=11.1=0
__win=0=0
__archspec=1=x86_64
base environment : G:\anaconda2022 (writable)
conda av data dir : G:\anaconda2022\etc\conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/pytorch/win-64
https://conda.anaconda.org/pytorch/noarch
https://conda.anaconda.org/huggingface/win-64
https://conda.anaconda.org/huggingface/noarch
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://conda.anaconda.org/anaconda-fusion/win-64
https://conda.anaconda.org/anaconda-fusion/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : G:\anaconda2022\pkgs
C:\Users\michael\.conda\pkgs
C:\Users\michael\AppData\Local\conda\conda\pkgs
envs directories : G:\anaconda2022\envs
C:\Users\michael\.conda\envs
C:\Users\michael\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.13.0 requests/2.27.1 CPython/3.9.12 Windows/10 Windows/10.0.19044
administrator : False
netrc file : None
offline mode : False
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4859/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4859/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3852/comments | https://api.github.com/repos/huggingface/datasets/issues/3852/events | https://github.com/huggingface/datasets/pull/3852 | 1,162,252,337 | PR_kwDODunzps40Fb26 | 3,852 | Redundant add dataset information and dead link. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3852). All of your documentation changes will be reflected on that endpoint."
] | "2022-03-08T05:57:05Z" | "2022-03-08T16:54:36Z" | "2022-03-08T16:54:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3852.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3852",
"merged_at": "2022-03-08T16:54:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3852.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3852"
} | > Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation.
The "add a dataset" link gives a 404 error, and the share_dataset link has changed. I feel this information is redundant/deprecated now since we have a more detailed guide for "How to add a dataset?". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3852/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3852/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5653/comments | https://api.github.com/repos/huggingface/datasets/issues/5653/events | https://github.com/huggingface/datasets/issues/5653 | 1,633,254,159 | I_kwDODunzps5hWXsP | 5,653 | Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented | {
"avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4",
"events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}",
"followers_url": "https://api.github.com/users/RmZeta2718/followers",
"following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}",
"gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RmZeta2718",
"id": 42400165,
"login": "RmZeta2718",
"node_id": "MDQ6VXNlcjQyNDAwMTY1",
"organizations_url": "https://api.github.com/users/RmZeta2718/orgs",
"received_events_url": "https://api.github.com/users/RmZeta2718/received_events",
"repos_url": "https://api.github.com/users/RmZeta2718/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RmZeta2718"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [
"I agree this should be documented"
] | "2023-03-21T05:25:35Z" | "2023-03-24T16:36:23Z" | "2023-03-24T16:36:23Z" | NONE | null | null | null | ### Describe the bug
[`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented
### Steps to reproduce the bug
Nothing to reproduce
### Expected behavior
[The documentation of `num_shards`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_shards) explicitly says that it depends on `max_shard_size`; it should also mention `num_proc`.
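A minimal sketch of the interaction being reported (the dataset name and output paths are only examples):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes")  # any small dataset works; chosen only as an example

# With num_proc set and no max_shard_size, the number of shards follows num_proc
ds.save_to_disk("out_dir_num_proc", num_proc=8)          # e.g. 8 shards per split
# With max_shard_size, the number of shards is driven by the target shard size instead
ds.save_to_disk("out_dir_sharded", max_shard_size="500MB")
```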
### Environment info
datasets main document | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5653/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5653/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4835/comments | https://api.github.com/repos/huggingface/datasets/issues/4835/events | https://github.com/huggingface/datasets/pull/4835 | 1,336,994,835 | PR_kwDODunzps49FJg9 | 4,835 | Fix documentation card of ethos dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-08-12T09:51:06Z" | "2022-08-12T13:13:55Z" | "2022-08-12T12:59:39Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4835.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4835",
"merged_at": "2022-08-12T12:59:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4835.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4835"
} | Fix documentation card of ethos dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4835/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4835/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5429/comments | https://api.github.com/repos/huggingface/datasets/issues/5429/events | https://github.com/huggingface/datasets/pull/5429 | 1,535,192,687 | PR_kwDODunzps5HeuyT | 5,429 | Fix CI by temporarily pinning apache-beam < 2.44.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2023-01-16T16:20:09Z" | "2023-01-16T16:51:42Z" | "2023-01-16T16:49:03Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5429",
"merged_at": "2023-01-16T16:49:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5429"
} | Temporarily pin apache-beam < 2.44.0
Fix #5426. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5429/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4185/comments | https://api.github.com/repos/huggingface/datasets/issues/4185/events | https://github.com/huggingface/datasets/issues/4185 | 1,209,429,743 | I_kwDODunzps5IFm7v | 4,185 | Librispeech documentation, clarification on format | {
"avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4",
"events_url": "https://api.github.com/users/albertz/events{/privacy}",
"followers_url": "https://api.github.com/users/albertz/followers",
"following_url": "https://api.github.com/users/albertz/following{/other_user}",
"gists_url": "https://api.github.com/users/albertz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertz",
"id": 59132,
"login": "albertz",
"node_id": "MDQ6VXNlcjU5MTMy",
"organizations_url": "https://api.github.com/users/albertz/orgs",
"received_events_url": "https://api.github.com/users/albertz/received_events",
"repos_url": "https://api.github.com/users/albertz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertz"
} | [] | open | false | null | [] | null | [
"(@patrickvonplaten )",
"Also cc @lhoestq here",
"The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is done on the fly, which is also why one should **not** do `ds[\"audio\"][\"array\"][0]` as this will decode all dataset samples, but instead `ds[0][\"audio\"][\"array\"]` see: https://huggingface.co/docs/datasets/audio_process#audio-datasets\r\n\r\n",
"So, again to clarify: On disk, only the raw flac file content is stored? Is this also the case after `save_to_disk`?\r\n\r\nAnd is it simple to also store it re-encoded as ogg or mp3 instead?\r\n",
"Hey, \r\n\r\nSorry yeah I was just about to look into this! We actually had an outdated version of Librispeech ASR that didn't save any files, but instead converted the audio files to a byte string, then was then decoded on-the-fly. This however is not very user-friendly so we recently decided to instead show the full path of the audio files with the `path` parameter.\r\n\r\nI'm currently changing this for Librispeech here: https://github.com/huggingface/datasets/pull/4184 .\r\nYou should be able to see the audio file in the original `flac` format under `path` then. I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ? ",
"> I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ?\r\n\r\nSure, I would expect that `load_dataset(\"librispeech_asr\")` would give you the original (not re-encoded) data (flac or already decoded). So such re-encoding logic would be some separate generic function. So I could do sth like `dataset.reencode_as_ogg(**ogg_encode_opts).save_to_disk(...)` or so.\r\n",
"A follow-up question: I wonder whether a Parquet dataset is maybe more what we actually want to have? (Following also my comment here: https://github.com/huggingface/datasets/pull/4184#issuecomment-1105045491.) Because I think we actually would prefer to embed the data content in the dataset.\r\n\r\nSo, instead of `save_to_disk`/`load_from_disk`, we would use `to_parquet`,`from_parquet`? Is there any downside? Are arrow files more efficient?\r\n\r\nRelated is also the doc update in #4193.\r\n",
"`save_to_disk` saves the dataset as an Arrow file, which is the format we use to load a dataset using memory mapping. This way the dataset does not fill your RAM, but is read from your disk instead.\r\n\r\nTherefore you can directly reload a dataset saved with `save_to_disk` using `load_from_disk`.\r\n\r\nParquet files are used for cold storage: to use memory mapping on a Parquet dataset, you first have to convert it to Arrow. We use Parquet to reduce the I/O when pushing/downloading data from the Hugging face Hub. When you load a Parquet file from the Hub, it is converted to Arrow on the fly during the download."
] | "2022-04-20T09:35:55Z" | "2022-04-21T11:00:53Z" | null | NONE | null | null | null | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audio
> file to a float32 array, please make use of the `.map()` function as follows:
>
> ```python
> import soundfile as sf
> def map_to_array(batch):
> speech_array, _ = sf.read(batch["file"])
> batch["speech"] = speech_array
> return batch
> dataset = dataset.map(map_to_array, remove_columns=["file"])
> ```
Is this still true?
In my case, `ds["train.100"]` returns:
```
Dataset({
features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'],
num_rows: 28539
})
```
and taking the first instance yields:
```
{'file': '374-180298-0000.flac',
'audio': {'path': '374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED',
'speaker_id': 374,
'chapter_id': 180298,
'id': '374-180298-0000'}
```
The `audio` `array` seems to be decoded already, so the convert/decode code mentioned in the doc is wrong?
But I wonder: is it actually stored as flac on disk and decoded on the fly? Or was it already decoded during preparation and stored as raw samples on disk?
Note that I also used `datasets.load_dataset("librispeech_asr", "clean").save_to_disk(...)` and then `datasets.load_from_disk(...)` in this example. Does this change anything about how it is stored on disk?
A small related question: actually I would even prefer to store it as mp3 or ogg on disk. Is this easy to convert? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4185/timeline | null | null | false |
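Illustrative sketch only (not part of the issue record above) of the on-the-fly audio decoding discussed there; the dataset, config, and split names are taken from the issue itself, and loading them triggers the usual download and caching.

```python
from datasets import load_dataset

# Names follow the issue above.
ds = load_dataset("librispeech_asr", "clean", split="train.100")

# On disk the dataset keeps the audio files; decoding happens lazily when an
# example's "audio" field is accessed.
sample = ds[0]["audio"]        # decodes only this one file
waveform = sample["array"]     # float32 numpy array
sampling_rate = sample["sampling_rate"]

# Accessing the whole column first (ds["audio"]) would decode every example,
# which is what the maintainers advise against in the comments above.
```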
https://api.github.com/repos/huggingface/datasets/issues/5431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5431/comments | https://api.github.com/repos/huggingface/datasets/issues/5431/events | https://github.com/huggingface/datasets/issues/5431 | 1,535,862,621 | I_kwDODunzps5bi2dd | 5,431 | CI benchmarks are broken: Unknown arguments: runnerPath, path | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | "2023-01-17T06:49:57Z" | "2023-01-18T06:33:24Z" | "2023-01-17T08:51:18Z" | MEMBER | null | null | null | Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161
```
Unknown arguments: runnerPath, path
```
Stack trace:
```
100%|██████████| 500/500 [00:01<00:00, 338.98ba/s]
Updating lock file 'dvc.lock'
To track the changes with git, run:
git add dvc.lock
To enable auto staging, run:
dvc config core.autostage true
Use `dvc push` to send your updates to remote storage.
cml send-comment <markdown file>
Global Options:
--log Logging verbosity
[string] [choices: "error", "warn", "info", "debug"] [default: "info"]
--driver Git provider where the repository is hosted
[string] [choices: "github", "gitlab", "bitbucket"] [default: infer from the
environment]
--repo Repository URL or slug
[string] [default: infer from the environment]
--driver-token, --token CI driver personal/project access token (PAT)
[string] [default: infer from the environment]
--help Show help [boolean]
Options:
--target Comment type (`commit`, `pr`, `commit/f00bar`,
`pr/42`, `issue/1337`),default is automatic (`pr`
but fallback to `commit`). [string]
--watch Watch for changes and automatically update the
comment [boolean]
--publish Upload any local images found in the Markdown
report [boolean] [default: true]
--publish-url Self-hosted image server URL
[string] [default: "https://asset.cml.dev/"]
--publish-native, --native Uses driver's native capabilities to upload assets
instead of CML's storage; not available on GitHub
[boolean]
--watermark-title Hidden comment marker (used for targeting in
subsequent `cml comment update`); "{workflow}" &
"{run}" are auto-replaced [string] [default: ""]
Unknown arguments: runnerPath, path
Error: Process completed with exit code 1.
```
Issue reported to iterative/cml:
- iterative/cml#1319 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5431/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6298/comments | https://api.github.com/repos/huggingface/datasets/issues/6298/events | https://github.com/huggingface/datasets/pull/6298 | 1,938,797,389 | PR_kwDODunzps5ckg6j | 6,298 | Doc readme improvements | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006761 / 0.011353 (-0.004592) | 0.004307 / 0.011008 (-0.006701) | 0.084682 / 0.038508 (0.046174) | 0.083994 / 0.023109 (0.060885) | 0.316612 / 0.275898 (0.040714) | 0.346157 / 0.323480 (0.022678) | 0.004490 / 0.007986 (-0.003495) | 0.003699 / 0.004328 (-0.000629) | 0.066144 / 0.004250 (0.061894) | 0.057958 / 0.037052 (0.020906) | 0.319018 / 0.258489 (0.060529) | 0.367597 / 0.293841 (0.073756) | 0.031146 / 0.128546 (-0.097401) | 0.008814 / 0.075646 (-0.066832) | 0.290971 / 0.419271 (-0.128301) | 0.052769 / 0.043533 (0.009236) | 0.313125 / 0.255139 (0.057986) | 0.330473 / 0.283200 (0.047273) | 0.025922 / 0.141683 (-0.115760) | 1.494989 / 1.452155 (0.042834) | 1.556140 / 1.492716 (0.063423) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.310580 / 0.018006 (0.292574) | 0.563600 / 0.000490 (0.563110) | 0.012344 / 0.000200 (0.012144) | 0.000382 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031468 / 0.037411 (-0.005943) | 0.084856 / 0.014526 (0.070331) | 0.101371 / 0.176557 (-0.075186) | 0.158735 / 0.737135 (-0.578400) | 0.102451 / 0.296338 (-0.193888) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402288 / 0.215209 (0.187079) | 4.001351 / 2.077655 (1.923696) | 2.022710 / 1.504120 (0.518590) | 1.850236 / 1.541195 (0.309041) | 1.946779 / 1.468490 
(0.478289) | 0.485828 / 4.584777 (-4.098949) | 3.584925 / 3.745712 (-0.160787) | 3.400815 / 5.269862 (-1.869046) | 2.123187 / 4.565676 (-2.442490) | 0.057373 / 0.424275 (-0.366902) | 0.007383 / 0.007607 (-0.000224) | 0.479773 / 0.226044 (0.253729) | 4.805342 / 2.268929 (2.536414) | 2.530151 / 55.444624 (-52.914473) | 2.190136 / 6.876477 (-4.686341) | 2.463666 / 2.142072 (0.321593) | 0.583512 / 4.805227 (-4.221715) | 0.134205 / 6.500664 (-6.366459) | 0.062021 / 0.075469 (-0.013448) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239532 / 1.841788 (-0.602255) | 20.252941 / 8.074308 (12.178633) | 14.265697 / 10.191392 (4.074305) | 0.158745 / 0.680424 (-0.521679) | 0.018605 / 0.534201 (-0.515596) | 0.394246 / 0.579283 (-0.185037) | 0.415260 / 0.434364 (-0.019104) | 0.462636 / 0.540337 (-0.077701) | 0.645318 / 1.386936 (-0.741618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007063 / 0.011353 (-0.004290) | 0.004388 / 0.011008 (-0.006621) | 0.064997 / 0.038508 (0.026489) | 0.085135 / 0.023109 (0.062026) | 0.424349 / 0.275898 (0.148451) | 0.456033 / 0.323480 (0.132553) | 0.005745 / 0.007986 (-0.002241) | 0.003705 / 0.004328 (-0.000624) | 0.065835 / 0.004250 (0.061585) | 0.058366 / 0.037052 (0.021314) | 0.421654 / 0.258489 (0.163165) | 0.460334 / 0.293841 (0.166493) | 0.032828 / 0.128546 (-0.095718) | 0.008974 / 0.075646 (-0.066673) | 0.072524 / 0.419271 (-0.346747) | 0.048558 / 0.043533 (0.005025) | 0.413546 / 0.255139 (0.158407) | 0.435765 / 0.283200 (0.152565) | 0.023754 / 0.141683 (-0.117929) | 1.476884 / 1.452155 (0.024730) | 1.560011 / 1.492716 (0.067294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.318279 / 0.018006 (0.300272) | 0.544990 / 0.000490 (0.544501) | 0.007118 / 0.000200 (0.006918) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033352 / 0.037411 (-0.004059) | 0.092921 / 0.014526 (0.078395) | 0.109028 / 0.176557 (-0.067528) | 0.161433 / 0.737135 (-0.575703) | 0.108445 / 0.296338 (-0.187893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438925 / 0.215209 (0.223716) | 4.400714 / 2.077655 (2.323059) | 2.403727 / 1.504120 (0.899607) | 2.236472 / 1.541195 (0.695277) | 2.319219 / 1.468490 (0.850729) | 0.490159 / 4.584777 (-4.094618) | 3.647474 / 3.745712 (-0.098238) | 3.433144 / 5.269862 (-1.836718) | 2.145367 / 4.565676 (-2.420310) | 0.057994 / 0.424275 (-0.366281) | 0.007452 / 0.007607 (-0.000155) | 0.513808 / 0.226044 (0.287763) | 5.130792 / 2.268929 (2.861863) | 2.861691 / 55.444624 (-52.582934) | 2.473292 / 6.876477 (-4.403185) | 2.756445 / 2.142072 (0.614372) | 0.586783 / 4.805227 (-4.218444) | 0.134170 / 6.500664 (-6.366494) | 0.061149 / 0.075469 (-0.014320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350144 / 1.841788 (-0.491644) | 21.003528 / 8.074308 (12.929220) | 15.174314 / 10.191392 (4.982922) | 0.186535 / 0.680424 (-0.493888) | 0.020821 / 0.534201 (-0.513380) | 0.399210 / 0.579283 (-0.180073) | 0.431942 / 0.434364 (-0.002422) | 0.475395 / 0.540337 (-0.064942) | 0.677457 / 1.386936 (-0.709479) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6aa5fc278324a253eab43ad1bc048e822e4ae5c7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007062 / 0.011353 (-0.004291) | 0.004299 / 0.011008 (-0.006710) | 0.086019 / 0.038508 (0.047511) | 0.085166 / 0.023109 (0.062057) | 0.355804 / 0.275898 (0.079906) | 0.381056 / 0.323480 (0.057577) | 0.005500 / 0.007986 (-0.002486) | 0.003496 / 0.004328 (-0.000833) | 0.064866 / 0.004250 (0.060615) | 0.057399 / 0.037052 (0.020346) | 0.357914 / 0.258489 (0.099425) | 0.395387 / 0.293841 (0.101546) | 0.031763 / 0.128546 (-0.096784) | 0.008665 / 0.075646 (-0.066981) | 0.290097 / 0.419271 (-0.129175) | 0.053297 / 0.043533 (0.009765) | 0.355659 / 0.255139 (0.100520) | 0.378232 / 0.283200 (0.095032) | 0.026015 / 0.141683 (-0.115668) | 1.437121 / 1.452155 (-0.015034) | 1.538798 / 1.492716 (0.046082) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243518 / 0.018006 (0.225511) | 0.461361 / 0.000490 (0.460871) | 0.009529 / 0.000200 (0.009329) | 0.000473 / 0.000054 (0.000419) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030379 / 0.037411 (-0.007032) | 0.089851 / 0.014526 (0.075325) | 0.098278 / 0.176557 (-0.078278) | 0.157077 / 0.737135 (-0.580058) | 0.098997 / 0.296338 (-0.197341) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382415 / 0.215209 (0.167206) | 3.801964 / 2.077655 (1.724309) | 1.887680 / 1.504120 (0.383560) | 1.775903 / 1.541195 (0.234709) | 1.851338 / 1.468490 
(0.382848) | 0.483616 / 4.584777 (-4.101161) | 3.612977 / 3.745712 (-0.132736) | 3.397700 / 5.269862 (-1.872162) | 2.114572 / 4.565676 (-2.451105) | 0.057250 / 0.424275 (-0.367025) | 0.007362 / 0.007607 (-0.000245) | 0.456873 / 0.226044 (0.230829) | 4.567319 / 2.268929 (2.298391) | 2.399476 / 55.444624 (-53.045148) | 2.054542 / 6.876477 (-4.821935) | 2.343432 / 2.142072 (0.201359) | 0.582319 / 4.805227 (-4.222908) | 0.134045 / 6.500664 (-6.366619) | 0.062726 / 0.075469 (-0.012743) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283390 / 1.841788 (-0.558398) | 20.358511 / 8.074308 (12.284202) | 14.933989 / 10.191392 (4.742597) | 0.164960 / 0.680424 (-0.515464) | 0.018625 / 0.534201 (-0.515576) | 0.394087 / 0.579283 (-0.185196) | 0.416761 / 0.434364 (-0.017603) | 0.466669 / 0.540337 (-0.073669) | 0.643161 / 1.386936 (-0.743775) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007141 / 0.011353 (-0.004212) | 0.004185 / 0.011008 (-0.006824) | 0.066097 / 0.038508 (0.027588) | 0.088436 / 0.023109 (0.065327) | 0.401189 / 0.275898 (0.125291) | 0.440402 / 0.323480 (0.116922) | 0.005729 / 0.007986 (-0.002257) | 0.003527 / 0.004328 (-0.000801) | 0.065278 / 0.004250 (0.061027) | 0.060866 / 0.037052 (0.023813) | 0.407035 / 0.258489 (0.148546) | 0.443923 / 0.293841 (0.150083) | 0.032922 / 0.128546 (-0.095625) | 0.008739 / 0.075646 (-0.066907) | 0.071800 / 0.419271 (-0.347472) | 0.048994 / 0.043533 (0.005461) | 0.403736 / 0.255139 (0.148597) | 0.419566 / 0.283200 (0.136366) | 0.025369 / 0.141683 (-0.116314) | 1.474980 / 1.452155 (0.022825) | 1.553500 / 1.492716 (0.060784) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225224 / 0.018006 (0.207218) | 0.462891 / 0.000490 (0.462401) | 0.006958 / 0.000200 (0.006758) | 0.000163 / 0.000054 (0.000108) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034431 / 0.037411 (-0.002980) | 0.100021 / 0.014526 (0.085495) | 0.108339 / 0.176557 (-0.068217) | 0.162762 / 0.737135 (-0.574374) | 0.108951 / 0.296338 (-0.187388) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435966 / 0.215209 (0.220757) | 4.351744 / 2.077655 (2.274089) | 2.372307 / 1.504120 (0.868187) | 2.192146 / 1.541195 (0.650951) | 2.326839 / 1.468490 (0.858349) | 0.488292 / 4.584777 (-4.096485) | 3.745227 / 3.745712 (-0.000485) | 3.456306 / 5.269862 (-1.813556) | 2.159771 / 4.565676 (-2.405906) | 0.057953 / 0.424275 (-0.366322) | 0.007469 / 0.007607 (-0.000138) | 0.515116 / 0.226044 (0.289071) | 5.162871 / 2.268929 (2.893942) | 2.850336 / 55.444624 (-52.594288) | 2.514700 / 6.876477 (-4.361777) | 2.748843 / 2.142072 (0.606770) | 0.587687 / 4.805227 (-4.217540) | 0.134333 / 6.500664 (-6.366331) | 0.062097 / 0.075469 (-0.013372) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.377082 / 1.841788 (-0.464705) | 21.103127 / 8.074308 (13.028819) | 15.325275 / 10.191392 (5.133883) | 0.166225 / 0.680424 (-0.514199) | 0.020472 / 0.534201 (-0.513729) | 0.395866 / 0.579283 (-0.183417) | 0.444964 / 0.434364 (0.010600) | 0.475367 / 0.540337 (-0.064970) | 0.693325 / 1.386936 (-0.693611) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79b5bbbd52ffd90dd958c05b333d7c90a03756cc \"CML watermark\")\n"
] | "2023-10-11T21:51:12Z" | "2023-10-12T12:47:15Z" | "2023-10-12T12:38:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6298.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6298",
"merged_at": "2023-10-12T12:38:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6298.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6298"
} | Changes in the doc README:
* adds two new sections (to be aligned with `transformers` and `hfh`): "Previewing the documentation" and "Writing documentation examples"
* replaces the mentions of `transformers` with `datasets`
* fixes some dead links | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6298/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6298/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4238/comments | https://api.github.com/repos/huggingface/datasets/issues/4238/events | https://github.com/huggingface/datasets/issues/4238 | 1,217,168,123 | I_kwDODunzps5IjIL7 | 4,238 | Dataset caching policy | {
"avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4",
"events_url": "https://api.github.com/users/loretoparisi/events{/privacy}",
"followers_url": "https://api.github.com/users/loretoparisi/followers",
"following_url": "https://api.github.com/users/loretoparisi/following{/other_user}",
"gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loretoparisi",
"id": 163333,
"login": "loretoparisi",
"node_id": "MDQ6VXNlcjE2MzMzMw==",
"organizations_url": "https://api.github.com/users/loretoparisi/orgs",
"received_events_url": "https://api.github.com/users/loretoparisi/received_events",
"repos_url": "https://api.github.com/users/loretoparisi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loretoparisi"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that your dataset loads without any problem for me:\r\n```python\r\nIn [2]: ds = load_dataset(\"loretoparisi/tatoeba-sentences\", data_files={\"train\": \"train.csv\", \"test\": \"test.csv\"}, delimiter=\"\\t\", column_names=['label', 'text'])\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 8256449\r\n })\r\n test: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 2061204\r\n })\r\n})\r\n``` ",
"@albertvillanova thank you, it seems it still does not work using:\r\n\r\n```python\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n download_mode=\"force_redownload\"\r\n)\r\n```\r\n[This](https://colab.research.google.com/drive/1EA6FWo5pHxU8rPHHRn24NlHqRPiOlPTr?usp=sharing) is my notebook!\r\n\r\nThe problem is that the download file's revision for `test.csv` is not correctly parsed\r\n\r\n![Schermata 2022-04-27 alle 18 09 41](https://user-images.githubusercontent.com/163333/165563507-0be53eb6-8f61-49b0-b959-306e59281de3.png)\r\n\r\nIf you download that file `test.csv` from the repo, the line `\\\\N` is not there anymore (it was there at the first file upload).\r\n\r\nMy impression is that the Apache Arrow file is still cached - so server side, despite of enabling a forced download. For what I can see I get those two arrow files, but I cannot grep the bad line (`\\\\N`) since are binary files:\r\n\r\n```\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/csv-test.arrow\r\n!head /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/dataset_info.json\r\n```\r\n",
"SOLVED! The problem was the with the file itself, using caching parameter helped indeed.\r\nThanks for helping!"
] | "2022-04-27T10:42:11Z" | "2022-04-27T16:29:25Z" | "2022-04-27T16:28:50Z" | NONE | null | null | null | ## Describe the bug
I cannot clear the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error:
```
[/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values)
852 if value not in self._str2int:
853 value = str(value).strip()
--> 854 output.append(self._str2int[str(value)])
855 else:
856 # No names provided, try to integerize
KeyError: '\\N'
```
The file is now cleaned up, but I still get the error. This happens even if I inspect the locally cached contents and clean up the files locally:
```python
from datasets import load_dataset_builder
dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences")
print(dataset_builder.cache_dir)
print(dataset_builder.info.features)
print(dataset_builder.info.splits)
```
```
Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd
/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519
None
None
```
and removing files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`.
Is there any remote file caching policy in place? If so, is it possible to programmatically disable it?
Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact, if I download the file locally from the raw link, it is up-to-date; but if I use it within `datasets` as shown above, it always gives me the first revision of the file, not the latest.
Thank you.
## Steps to reproduce the bug
```python
from datasets import load_dataset,Features,Value,ClassLabel
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"]
features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})
num_labels = features['label'].num_classes
data_files = { "train": "train.csv", "test": "test.csv" }
sentences = load_dataset(
"loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text'],
)
# You can make this part faster with num_proc=<some int>
sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
sentences = sentences.shuffle()
```
## Expected results
Properly tokenize dataset file `test.csv` without issues.
## Actual results
Specify the actual results or traceback.
```
Downloading data files: 100%
2/2 [00:16<00:00, 7.34s/it]
Downloading data: 100%
391M/391M [00:12<00:00, 36.6MB/s]
Downloading data: 100%
92.4M/92.4M [00:02<00:00, 40.0MB/s]
Extracting data files: 100%
2/2 [00:00<00:00, 47.66it/s]
Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data.
100%
2/2 [00:00<00:00, 25.94it/s]
11%
942339/8256449 [01:55<13:11, 9245.85ex/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-3-6a9867fad8d6>](https://localhost:8080/#) in <module>()
12 )
13 # You can make this part faster with num_proc=<some int>
---> 14 sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
15 sentences = sentences.shuffle()
10 frames
[/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values)
852 if value not in self._str2int:
853 value = str(value).strip()
--> 854 output.append(self._str2int[str(value)])
855 else:
856 # No names provided, try to integerize
KeyError: '\\N'
```
## Environment info
```
- `datasets` version: 2.1.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
- ```
```
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
- ```
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4238/timeline | null | completed | false |
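A minimal sketch of the workaround mentioned in the comments of the issue record above (forcing a re-download so stale cached CSVs are not reused); the repository, delimiter, and column names are taken from the issue itself.

```python
from datasets import load_dataset

data_files = {"train": "train.csv", "test": "test.csv"}

# download_mode="force_redownload" bypasses previously downloaded/cached
# copies of the remote files, which was the suggested fix above.
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files=data_files,
    delimiter="\t",
    column_names=["label", "text"],
    download_mode="force_redownload",
)
```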
https://api.github.com/repos/huggingface/datasets/issues/1682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1682/comments | https://api.github.com/repos/huggingface/datasets/issues/1682/events | https://github.com/huggingface/datasets/pull/1682 | 778,268,156 | MDExOlB1bGxSZXF1ZXN0NTQ4Mzg1NTk1 | 1,682 | Don't use xlrd for xlsx files | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-01-04T18:11:50Z" | "2021-01-04T18:13:14Z" | "2021-01-04T18:13:13Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1682.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1682",
"merged_at": "2021-01-04T18:13:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1682.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1682"
} | Since the latest release of `xlrd` (2.0), support for xlsx files has been dropped.
Therefore we needed to use something else.
A good alternative is `openpyxl`, which also integrates with pandas, so we can still call `pd.read_excel`.
I left the unused import of `openpyxl` in the dataset scripts to show users that this is a required dependency to use the scripts.
I tested the different datasets using `datasets-cli test` and the tests are successful (no missing examples). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1682/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1682/timeline | null | null | true |
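For illustration only (not part of the PR record above): the pandas call the PR relies on, assuming `openpyxl` is installed; the file name is a placeholder.

```python
import pandas as pd

# xlrd >= 2.0 only reads legacy .xls files, so .xlsx files need openpyxl.
# Passing the engine explicitly makes the required dependency visible.
df = pd.read_excel("data.xlsx", engine="openpyxl")  # "data.xlsx" is a hypothetical path
print(df.head())
```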
https://api.github.com/repos/huggingface/datasets/issues/1870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1870/comments | https://api.github.com/repos/huggingface/datasets/issues/1870/events | https://github.com/huggingface/datasets/pull/1870 | 807,306,564 | MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4 | 1,870 | Implement Dataset add_item | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-05-14T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"id": 6644287,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"open_issues": 0,
"state": "closed",
"title": "1.7",
"updated_at": "2021-05-31T16:20:53Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3"
} | [
"Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.",
"Sure ! I opened an issue #1877 so we can discuss this specific aspect :)",
"I am going to implement this consolidation step in #2151.",
"Sounds good !",
"I retake this PR once the consolidation step is already implemented by #2151."
] | "2021-02-12T15:03:46Z" | "2021-04-23T10:01:31Z" | "2021-04-23T10:01:31Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1870.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1870",
"merged_at": "2021-04-23T10:01:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1870.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1870"
} | Implement `Dataset.add_item`.
Close #1854. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1870/timeline | null | null | true |
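Illustrative sketch of the feature introduced by the PR record above, assuming a `datasets` version that includes `Dataset.add_item`; the toy columns and values are hypothetical.

```python
from datasets import Dataset

# Hypothetical two-row dataset used only for illustration.
ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# add_item appends a single example and returns a new Dataset.
ds = ds.add_item({"text": "c", "label": 2})
print(len(ds))  # 3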
https://api.github.com/repos/huggingface/datasets/issues/3314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3314/comments | https://api.github.com/repos/huggingface/datasets/issues/3314/events | https://github.com/huggingface/datasets/pull/3314 | 1,061,448,227 | PR_kwDODunzps4u6mdX | 3,314 | Adding arg to pass process rank to `map` | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [] | closed | false | null | [] | null | [
"Some commits seem to be there twice (made the mistake of rebasing because I wasn't sure whether the doc had changed), is this an issue @lhoestq ?"
] | "2021-11-23T15:55:21Z" | "2021-11-24T11:54:13Z" | "2021-11-24T11:54:13Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3314.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3314",
"merged_at": "2021-11-24T11:54:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3314.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3314"
} | This PR adds a `with_rank` argument to `map` that gives the user the possibility to pass the rank of each process to their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code, I'll add a multi-GPU example to the doc (+ write a bit in the doc for the new arg) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3314/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3314/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4203/comments | https://api.github.com/repos/huggingface/datasets/issues/4203/events | https://github.com/huggingface/datasets/pull/4203 | 1,212,431,067 | PR_kwDODunzps42oNrS | 4,203 | Add Precision Metric Card | {
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-04-22T14:23:48Z" | "2022-05-03T14:23:40Z" | "2022-05-03T14:16:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4203",
"merged_at": "2022-05-03T14:16:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4203"
} | What this PR mainly does:
- add metric card for precision metric
- update docs in precision python file
Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4203/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2395/comments | https://api.github.com/repos/huggingface/datasets/issues/2395/events | https://github.com/huggingface/datasets/pull/2395 | 898,762,730 | MDExOlB1bGxSZXF1ZXN0NjUwNTk3NjI0 | 2,395 | `pretty_name` for dataset in YAML tags | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Initially I removed the ` - ` since there was only one `pretty_name` per config but turns out it was breaking here in `from_yaml_string`https://github.com/huggingface/datasets/blob/74751e3f98c74d22c48c6beb1fab0c13b5dfd075/src/datasets/utils/metadata.py#L197 in `/utils/metadata.py`",
"@lhoestq I guess this will also need some validation?",
"Looks like the parser doesn't allow things like\r\n```\r\npretty_name:\r\n config_name1: My awesome config number 1\r\n config_name2: My amazing config number 2\r\n```\r\ntherefore you had to use `-` and consider them as a list.\r\n\r\nI would be nice to add support for this case in the validator.\r\n\r\nThere's one thing though: the DatasetMetadata object currently corresponds to the yaml tags that are flattened: the config names are just ignored, and the lists are concatenated.\r\n\r\nTherefore I think we would potentially need to instantiate several `DatasetMetadata` objects: one per config. Otherwise we would end up with a list of several pretty_name while we actually need at most 1 per config.\r\n\r\nWhat do you think @gchhablani ?",
"I was thinking of returning `metada_dict` (on line 193) whenever `load_dataset_card` is called (we can pass an extra parameter to `from_readme` or `from_yaml_string` for that to happen).\r\n\r\nOne just needs config_name as key for the dictionary inside `pretty_name` dict and for single config, there would be only one value to print. We can do this for other fields as well like `size_categories`, `languages` etc. This will obviate the need to flatten the YAML tags so that don't have to instantiate several DatasetMetadata objects. What do you guys think @lhoestq @gchhablani? \r\n\r\nUpdate: I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my `metadata_dict` before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.",
"Hi @lhoestq @bhavitvyamalik \r\n\r\n@bhavitvyamalik, I'm not sure I understand your approach, can you please elaborate? The `metadata_dict` is flattened before instantiating the object, do you want to remove that? Still confused.\r\n\r\nFew things come to my mind after going through this PR. They might not be entirely relevant to the current task, but I'm just trying to think about possible cases and discuss them here.\r\n\r\n1. Instead of instantiating a new `DatasetMetadata` for each config with flattened tags, why can't we make it more flexible and validate only non-dict items? However, in that case, the types wouldn't be as strict for the class attributes. It would also not work for cases that are like `Dict[str,List[Dict[str,str]]`, but I guess that won't be needed anyway in the foreseeable future?\r\n\r\n Ideally, it would be something like - Check the metadata tag type (root), do a DFS, and find the non-dict objects (leaves), and validate them. Is this an overkill to handle the problem?\r\n2. For single-config datasets, there can be slightly different validation for `pretty_names`, than for multi-config. The user shouldn't need to provide a config name for single-config datasets, wdyt @bhavitvyamalik @lhoestq? Either way, for multi-config, the validation can use the dictionary keys in the path to that leaf node to verify `pretty_names: ... (config)` as well. This will check whether the config name is same as the key (might be unnecessary but prevents typos, so less work for the reviewer(s)). For future, however, it might be beneficial to have something like this.\r\n3. Should we have a default config name for single-config datasets? People use any string they feel like. I've seen `plain_text`, `default` and the dataset name. I've used `image` for a few datasets myself AFAIR. For smarter validation (again, a future case ;-;), it'd be easier for us to have a simple rule for naming configs in single-config datasets. Wdyt @lhoestq?",
"Btw, `pretty_names` can also be used to handle this during validation :P \r\n\r\n```\r\n-# Dataset Card for [Dataset Name]\r\n+# Dataset Card for Allegro Reviews\r\n```\r\n\r\nThis is where `DatasetMetadata` and `ReadMe` should be combined. But there are very few overlaps, I guess.\r\n\r\n\n@bhavitvyamalik @lhoestq What about adding a pretty name across all configs, and then config-specific names?\n\nLike\n\n```yaml\npretty_names:\n all_configs: X (dataset_name)\n config_1: X1 (config_1_name)\n config_2: X2 (config_2_name)\n```\nThen, using the `metadata_dict`, the ReadMe header can be validated against `X`.\n\nSorry if I'm throwing too many ideas at once.",
"@bhavitvyamalik\r\n\r\nNow, I think I better understand what you're saying. So you want to skip validation for the unflattened metadata and just return it? And let the validation run for the flattened version?",
"Exactly! Validation is important but once the YAML tags are validated I feel we shouldn't do that again while calling `load_dataset_card`. +1 for default config name for single-config datasets.",
"@bhavitvyamalik\r\nActually, I made the `ReadMe` validation similar to `DatasetMetadata` validation and the class was validating the metadata during the creation. \r\n\r\nMaybe we need to have a separate validation method instead of having it in `__post_init__`? Wdyt @lhoestq? \r\n\r\nI'm sensing too many things to look into. It'd be great to discuss these sometime. \r\n\r\nBut if this PR is urgent then @bhavitvyamalik's logic seems good to me. It doesn't need major modifications in validation.",
"> Maybe we need to have a separate validation method instead of having it in __post_init__? Wdyt @lhoestq?\r\n\r\nWe can definitely have a `is_valid()` method instead of doing it in the post init.\r\n\r\n> What about adding a pretty name across all configs, and then config-specific names?\r\n\r\nLet's keep things simple to starts with. If we can allow both single-config and multi-config cases it would already be great :)\r\n\r\nfor single-config:\r\n```yaml\r\npretty_name: Allegro Reviews\r\n```\r\n\r\nfor multi-config:\r\n```yaml\r\npretty_name:\r\n mrpc: Microsoft Research Paraphrase Corpus (MRPC)\r\n sst2: Stanford Sentiment Treebank\r\n ...\r\n```\r\n\r\nTo support the multi-config case I see two options:\r\n1. Don't allow DatasetMetadata to have dictionaries but instead have separate DatasetMetadata objects per config\r\n2. allow DatasetMetadata to have dictionaries. It implies to remove the flattening step. Then we could get metadata for a specific config this way for example:\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\nglue_dataset_card = load_dataset_card(\"glue\")\r\nprint(glue_dataset_card.metadata)\r\n# DatasetMetatada object with dictionaries since there are many configs\r\nprint(glue_dataset_card.metadata.get_metadata_for_config(\"mrpc\"))\r\n# DatasetMetatada object with no dictionaries since there are only the mrpc tags\r\n```\r\n\r\nLet me know what you think or if you have other ideas.",
"I think Option 2 is better.\n\nJust to clarify, will `get_metadata_for_config` also return common details, like language, say?",
"> Just to clarify, will get_metadata_for_config also return common details, like language, say?\r\n\r\nYes that would be more convenient IMO. For example a dataset card like this\r\n```yaml\r\nlanguages:\r\n- en\r\npretty_name:\r\n config1: Pretty Name for Config 1\r\n config3: Pretty Name for Config 2\r\n```\r\n\r\nthen `metadat.get_metadata_for_config(\"config1\")` would return something like\r\n```python\r\nDatasetMetadata(languages=[\"en\"], pretty_name=\"Pretty Name for Config 1\")\r\n```",
"@lhoestq, should we do this post-processing in `load_dataset_card` by returning unflattened dictionary from `DatasetMetadata` or send this from `DatasetMetadata`? Since there isn't much to do I feel once we have the unflattened dictionary",
"Not sure I understand the difference @bhavitvyamalik , could you elaborate please ?",
"I was talking about this unflattened dictionary:\r\n\r\n> I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my metadata_dict before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.\r\n\r\nPost-processing meant extracting config-specific fields from this dictionary and then return this `languages=[\"en\"], pretty_name=\"Pretty Name for Config 1\"`",
"I still don't understand what you mean by \"returning unflattened dictionary from DatasetMetadata or send this from DatasetMetadata\", sorry. Can you give an example or rephrase this ?\r\n\r\nIMO load_dataset_card can return a dataset card object with a metadata field. If the metadata isn't flat (i.e. it has several configs), you can get the flat metadata of 1 specific config with `get_metadata_for_config`. But of course if you have better ideas or suggestions, we can discuss this",
"@lhoestq, I think he is saying whatever `get_metadata_for_config` is doing can be done in `load_dataset_card` by taking the unflattened `metadata_dict` as input.\r\n\r\n@bhavitvyamalik, I think it'd be better to have this \"post-processing\" in `DatasetMetadata` instead of `load_dataset_card`, as @lhoestq has shown. I'll quickly get on that.\r\n\r\n---\r\nThree things that are to be changed in `DatasetMetadata`:\r\n1. Allow Non-flat elements and their validation.\r\n2. Create a method to get metadata by config name.\r\n3. Create a `validate()` method.\r\n\r\nOnce that is done, this PR can be updated and reviewed, wdys?",
"Thanks @gchhablani for the help ! Now that https://github.com/huggingface/datasets/pull/2436 is merged you can remove the `-` in the pretty_name @bhavitvyamalik :)",
"Thanks @bhavitvyamalik.\r\n\r\nI think this PR was superseded by these others also made by you:\r\n- #3498\r\n- #3536\r\n\r\nI'm closing this."
] | "2021-05-22T09:24:45Z" | "2022-09-23T13:29:14Z" | "2022-09-23T13:29:13Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2395.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2395",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2395.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2395"
} | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2395/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2395/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6051/comments | https://api.github.com/repos/huggingface/datasets/issues/6051/events | https://github.com/huggingface/datasets/issues/6051 | 1,811,549,650 | I_kwDODunzps5r-g3S | 6,051 | Skipping shard in the remote repo and resume upload | {
"avatar_url": "https://avatars.githubusercontent.com/u/9029817?v=4",
"events_url": "https://api.github.com/users/rs9000/events{/privacy}",
"followers_url": "https://api.github.com/users/rs9000/followers",
"following_url": "https://api.github.com/users/rs9000/following{/other_user}",
"gists_url": "https://api.github.com/users/rs9000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rs9000",
"id": 9029817,
"login": "rs9000",
"node_id": "MDQ6VXNlcjkwMjk4MTc=",
"organizations_url": "https://api.github.com/users/rs9000/orgs",
"received_events_url": "https://api.github.com/users/rs9000/received_events",
"repos_url": "https://api.github.com/users/rs9000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rs9000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rs9000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rs9000"
} | [] | closed | false | null | [] | null | [
"Hi! `_select_contiguous` fetches a (zero-copy) slice of the dataset's Arrow table to build a shard, so I don't think this part is the problem. To me, the issue seems to be the step where we embed external image files' bytes (a lot of file reads). You can use `.map` with multiprocessing to perform this step before `push_to_hub` in a faster manner and cache it to disk:\r\n```python\r\nfrom datasets.table import embed_table_storage\r\n# load_dataset(...)\r\nformat = dataset.format\r\ndataset = dataset.with_format(\"arrow\")\r\ndataset = dataset.map(embed_table_storage, batched=True)\r\ndataset = dataset.with_format(**format)\r\n# push_to_hub(...)\r\n```\r\n\r\n(In Datasets 3.0, these external bytes will be written to an Arrow file when generating a dataset to avoid this \"embed\" step)",
"Hi, thanks, this solution saves some time.\r\nBut can't we avoid embedding all external image files bytes with each push, skipping the images that have already been pushed into the repo?\r\n\r\nEdit: Ok I missed the part of cache it manually on the disk the first time, this solves the problem. Thank you"
] | "2023-07-19T09:25:26Z" | "2023-07-20T18:16:01Z" | "2023-07-20T18:16:00Z" | NONE | null | null | null | ### Describe the bug
For some reason when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume the uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
enumerate(itertools.chain([first_shard], shards_iter)),
desc="Pushing dataset shards to the dataset hub",
total=num_shards,
disable=not logging.is_progress_bar_enabled(),
):
shard_path_in_repo = path_in_repo(index, shard)
# Upload a shard only if it doesn't already exist in the repository
if shard_path_in_repo not in data_files:
```
In particular, iterating the generator is slow during the call:
```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```
I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.
### Steps to reproduce the bug
1. Start the upload
```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```
2. Stop and restart the upload after hundreds of shards
### Expected behavior
Skip the uploaded shards faster.
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6051/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1206/comments | https://api.github.com/repos/huggingface/datasets/issues/1206/events | https://github.com/huggingface/datasets/pull/1206 | 757,952,992 | MDExOlB1bGxSZXF1ZXN0NTMzMjE2NDYw | 1,206 | Adding Enriched WebNLG dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [] | closed | false | null | [] | null | [
"Nice :) \r\n\r\ncould you add the tags and also remove all the dummy data files that are not zipped ? The diff currently shows 800 files changes xD",
"Aaaaand it's rebase time - the new one is at #1264 !",
"closing this one since a new PR was created"
] | "2020-12-06T15:36:20Z" | "2023-09-24T09:51:43Z" | "2020-12-09T09:40:32Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1206.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1206",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1206.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1206"
} | This pull requests adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1206/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2948/comments | https://api.github.com/repos/huggingface/datasets/issues/2948/events | https://github.com/huggingface/datasets/pull/2948 | 1,000,844,077 | PR_kwDODunzps4r9PdV | 2,948 | Fix minor URL format in scitldr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-09-20T11:11:32Z" | "2021-09-20T13:18:28Z" | "2021-09-20T13:18:28Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2948.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2948",
"merged_at": "2021-09-20T13:18:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2948.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2948"
} | While investigating issue #2918, I found this minor format issues in the URLs (if runned in a Windows machine). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2948/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4600/comments | https://api.github.com/repos/huggingface/datasets/issues/4600/events | https://github.com/huggingface/datasets/pull/4600 | 1,289,177,042 | PR_kwDODunzps46l3P1 | 4,600 | Remove multiple config section | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-29T19:09:21Z" | "2022-07-04T17:41:20Z" | "2022-07-04T17:29:41Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4600.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4600",
"merged_at": "2022-07-04T17:29:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4600.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4600"
} | This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https://huggingface.slack.com/archives/C034N0A7H09/p1656107063801969) for more details :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4600/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5274/comments | https://api.github.com/repos/huggingface/datasets/issues/5274/events | https://github.com/huggingface/datasets/issues/5274 | 1,458,646,455 | I_kwDODunzps5W8S23 | 5,274 | load_dataset possibly broken for gated datasets? | {
"avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4",
"events_url": "https://api.github.com/users/TristanThrush/events{/privacy}",
"followers_url": "https://api.github.com/users/TristanThrush/followers",
"following_url": "https://api.github.com/users/TristanThrush/following{/other_user}",
"gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TristanThrush",
"id": 20826878,
"login": "TristanThrush",
"node_id": "MDQ6VXNlcjIwODI2ODc4",
"organizations_url": "https://api.github.com/users/TristanThrush/orgs",
"received_events_url": "https://api.github.com/users/TristanThrush/received_events",
"repos_url": "https://api.github.com/users/TristanThrush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TristanThrush"
} | [] | closed | false | null | [] | null | [
"@BradleyHsu",
"Btw, thanks very much for finding the hub rollback temporary fix and bringing the issue to our attention @KhoomeiK!",
"I see the same issue when calling `load_dataset('poloclub/diffusiondb', 'large_random_1k')` with `datasets==2.7.1` and `huggingface-hub=0.11.0`. No issue with `datasets=2.6.1` and `huggingface_hub==0.10.1`.\r\n\r\nhttps://github.com/poloclub/diffusiondb/issues/7",
"I fixed my issue by specifying `repo_type` in `hf_hub_url()`. https://github.com/poloclub/diffusiondb/commit/9eb91c79aaca98b0515a0ce45778b8af65b84652\r\n\r\nI opened a PR on the Winoground's repo: https://huggingface.co/datasets/facebook/winoground/discussions/2",
"This is a bug in the script, indeed. The most robust fix is to use a relative path instead of `hf_hub_url`, which does not depend on `huggingface_hub`'s version 🙂. I've opened a PR here: https://huggingface.co/datasets/facebook/winoground/discussions/3.",
"Awesome, big thanks to both @xiaohk and @mariosasko!",
"so, if i reproduce the bug, what should i do ? with huggingface_hub0.13.3 dataset2.6.1",
"huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name':\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(ARGS.model_path, trust_remote_code=True)\r\n\r\nPlease handle automatically for local path and repo name inside, otherwise users always get confused about this",
"I think I'm also hitting this error, trying to load a model from a local path."
] | "2022-11-21T21:59:53Z" | "2023-05-27T00:06:14Z" | "2022-11-28T02:50:42Z" | CONTRIBUTOR | null | null | null | ### Describe the bug
When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub:
```
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_repo_id(repo_id)
165 if repo_id.count("/") > 1:
166 raise HFValidationError(
--> 167 "Repo id must be in the form 'repo_name' or 'namespace/repo_name':"
168 f" '{repo_id}'. Use `repo_type` argument if needed."
169 )
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'datasets/facebook/winoground'. Use `repo_type` argument if needed
```
### Steps to reproduce the bug
Install requirements:
```
pip install transformers
pip install datasets
# It works if you uncomment the following line, rolling back huggingface hub:
# pip install huggingface-hub==0.10.1
```
Then:
```
from datasets import load_dataset
auth_token = "" # Replace with an auth token, which you can get from your huggingface account: Profile -> Settings -> Access Tokens -> New Token
winoground = load_dataset("facebook/winoground", use_auth_token=auth_token)["test"]
```
### Expected behavior
Downloading of the dataset
### Environment info
Just a google colab; see here: https://colab.research.google.com/drive/15wwOSte2CjTazdnCWYUm2VPlFbk2NGc0?usp=sharing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5274/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5274/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1925/comments | https://api.github.com/repos/huggingface/datasets/issues/1925/events | https://github.com/huggingface/datasets/pull/1925 | 813,600,902 | MDExOlB1bGxSZXF1ZXN0NTc3NzIyMzc3 | 1,925 | Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index" | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq ,\r\n\r\nI am running into an issue now when trying to run RAG. Running exactly as described [here](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage) I get the error below. Wondering if it's related to this.\r\n\r\nRunning Transformers 4.3.2 with datasets installed from source from `master` branch.\r\n\r\n```bash\r\n(venv) sergey_mkrtchyan datasets (master) $ python\r\nPython 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)\r\n[Clang 6.0 (clang-600.0.57)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\r\n>>> tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\r\n>>> retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\nUsing custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False\r\nReusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\nUsing custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4\r\nReusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\n 0%| | 0/10 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 425, in from_pretrained\r\n return cls(\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 387, in __init__\r\n self.init_retrieval()\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 458, in init_retrieval\r\n self.index.init_index()\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 284, in init_index\r\n self.dataset = load_dataset(\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py\", line 750, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py\", line 734, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py\", line 195, in map_nested\r\n return function(data_struct)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py\", line 769, in _build_single_dataset\r\n post_processed = self._post_process(ds, resources_paths)\r\n File \"/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py\", line 205, in _post_process\r\n dataset.add_faiss_index(\"embeddings\", custom_index=index)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py\", line 2516, in add_faiss_index\r\n super().add_faiss_index(\r\n File 
\"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py\", line 416, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py\", line 281, in add_vectors\r\n self.faiss_index.add(vecs)\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py\", line 104, in replacement_add\r\n self.add_c(n, swig_ptr(x))\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py\", line 3263, in add\r\n return _swigfaiss.IndexHNSW_add(self, n, x)\r\nRuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed\r\n>>>\r\n```\r\n\r\nThe error message is hinting that it could be related to this, but I might be wrong. Any ideas?\r\n\r\n\r\nEdit: Can confirm it's working fine with datasets==1.2.0\r\n\r\nDouble Edit: Did some further digging. The issue is related to this commit: 8c5220307c33f00e01c3bf7b8. I opened a separate issue #1941 for proper tracking."
] | "2021-02-22T15:23:46Z" | "2021-02-25T01:33:48Z" | "2021-02-22T15:36:08Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1925",
"merged_at": "2021-02-22T15:36:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1925"
} | Fix the bugs noticed in #1915
There was a bug when `with_embeddings=False` where the configuration name was the same as if `with_embeddings=True`, which led the dataset builder to do bad verifications (for example it used to expect to download the embeddings for `with_embeddings=False`).
Another issue was that setting `index_name="no_index"` didn't set `with_index` to False.
I fixed both of them and added dummy data for those configurations for testing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1925/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4471/comments | https://api.github.com/repos/huggingface/datasets/issues/4471/events | https://github.com/huggingface/datasets/issues/4471 | 1,267,475,268 | I_kwDODunzps5LjCNE | 4,471 | CI error with repo lhoestq/_dummy | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"fixed by https://github.com/huggingface/datasets/pull/4472"
] | "2022-06-10T12:26:06Z" | "2022-06-10T13:24:53Z" | "2022-06-10T13:24:53Z" | MEMBER | null | null | null | ## Describe the bug
CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269
```
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoestq/_dummy?full=true
```
The repo seems to no longer exist: https://huggingface.co/api/datasets/lhoestq/_dummy
```
error: "Repository not found"
```
CC: @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4471/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4471/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3282/comments | https://api.github.com/repos/huggingface/datasets/issues/3282/events | https://github.com/huggingface/datasets/issues/3282 | 1,055,054,898 | I_kwDODunzps4-4twy | 3,282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/10078549?v=4",
"events_url": "https://api.github.com/users/MinionAttack/events{/privacy}",
"followers_url": "https://api.github.com/users/MinionAttack/followers",
"following_url": "https://api.github.com/users/MinionAttack/following{/other_user}",
"gists_url": "https://api.github.com/users/MinionAttack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MinionAttack",
"id": 10078549,
"login": "MinionAttack",
"node_id": "MDQ6VXNlcjEwMDc4NTQ5",
"organizations_url": "https://api.github.com/users/MinionAttack/orgs",
"received_events_url": "https://api.github.com/users/MinionAttack/received_events",
"repos_url": "https://api.github.com/users/MinionAttack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MinionAttack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MinionAttack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MinionAttack"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting :)\r\nI think this is because the dataset is behind an access page. We can fix the dataset viewer\r\n\r\nIf you also have this error when you use the `datasets` library in python, you should probably pass `use_auth_token=True` to the `load_dataset()` function to use your account to access the dataset.",
"Ah ok, I didn't realise about the login page. I'll try `use_auth_token=True` and see if that solves it.\r\n\r\nRegards!",
"Hi, \r\n\r\nUsing `use_auth_token=True` and downloading the credentials with `huggingface-cli login` (stored in .huggingface/token) solved the issue.\r\n\r\nShould I leave the issue open until you fix the Dataset viewer issue?",
"Cool ! Yes let's keep this issue open until the viewer is fixed - I'll close it when this is fixed. Thanks",
"The error I get when trying to load OSCAR 21.09 is this\r\n```\r\nConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n```\r\n\r\nThe URL I get in the browser is this\r\n```\r\nhttps://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n```\r\n\r\nMaybe URL is the issue? (resolve vs blob)",
"> The error I get when trying to load OSCAR 21.09 is this\r\n> \r\n> ```\r\n> ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> The URL I get in the browser is this\r\n> \r\n> ```\r\n> https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> Maybe URL is the issue? (resolve vs blob)\r\n\r\nYou need to download your login credentials. See `huggingface-cli login` documentation and when loading the dataset use `use_auth_token=True`:\r\n`\r\nload_dataset(corpus, language, split=None, use_auth_token=True, cache_dir=cache_folder)`",
"Fixed.\r\n\r\n<img width=\"1542\" alt=\"Capture d’écran 2022-04-12 à 13 57 24\" src=\"https://user-images.githubusercontent.com/1676121/162957585-af96d19c-f86c-47fe-80c4-2b071083cee4.png\">\r\n"
] | "2021-11-16T16:05:19Z" | "2022-04-12T11:57:43Z" | "2022-04-12T11:57:43Z" | NONE | null | null | null | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.*
```
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
```
Am I the one who added this dataset ? No
Using the older version of [OSCAR](https://huggingface.co/datasets/oscar) I don't have any issues downloading languages with the dataset library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3282/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1254/comments | https://api.github.com/repos/huggingface/datasets/issues/1254/events | https://github.com/huggingface/datasets/pull/1254 | 758,518,774 | MDExOlB1bGxSZXF1ZXN0NTMzNjc5MTYy | 1,254 | Added WikiText-TL-39 | {
"avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4",
"events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}",
"followers_url": "https://api.github.com/users/jcblaisecruz02/followers",
"following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}",
"gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcblaisecruz02",
"id": 24757547,
"login": "jcblaisecruz02",
"node_id": "MDQ6VXNlcjI0NzU3NTQ3",
"organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs",
"received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events",
"repos_url": "https://api.github.com/users/jcblaisecruz02/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcblaisecruz02"
} | [] | closed | false | null | [] | null | [
"looks like this PR also includes changes about another dataset `covid_qa_deepset`\r\n\r\nCould you create another branch and another PR that only includes the changes for the wikitext-tl-39 dataset ?"
] | "2020-12-07T13:43:48Z" | "2020-12-08T16:00:58Z" | "2020-12-08T16:00:58Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1254.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1254",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1254.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1254"
} | This PR adds the WikiText-TL-39 Filipino Language Modeling dataset.
Paper: https://arxiv.org/abs/1907.00409
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1254/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2219/comments | https://api.github.com/repos/huggingface/datasets/issues/2219/events | https://github.com/huggingface/datasets/pull/2219 | 857,321,242 | MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3 | 2,219 | Added CUAD dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [
"1) Changed the language in a few places apart from those you mentioned in README\r\n2) Reduced the size of dummy data folder by removing all other entries except the first\r\n3) Updated YAML tags by using to the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while",
"@bhavitvyamalik Thanks for adding the dataset on huggingface! Can you please add a metric also for the dataset using the squad_v2 metric file? ",
"@MohammedRakib you can check [#2257](https://github.com/huggingface/datasets/pull/2257)"
] | "2021-04-13T21:05:03Z" | "2021-04-24T14:25:51Z" | "2021-04-16T08:50:44Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2219",
"merged_at": "2021-04-16T08:50:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2219"
} | Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2219/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4411/comments | https://api.github.com/repos/huggingface/datasets/issues/4411/events | https://github.com/huggingface/datasets/pull/4411 | 1,249,462,390 | PR_kwDODunzps44g_yL | 4,411 | Update `_format_columns` in `remove_columns` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | [
"🤗 This PR closes https://github.com/huggingface/datasets/issues/4398",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Thanks for reporting and providing a fix. I made a small change to make the fix easier to understand.",
"Hi, @mariosasko thanks! It makes sense, sorry I'm not that familiar with `datasets` code 😩 ",
"Sure @albertvillanova I'll do that later today and ping you once done, thanks! :hugs:",
"Hi again @albertvillanova! Let me know if those tests are fine 🤗 ",
"Hi @alvarobartt,\r\n\r\nI think your tests are failing. I don't know why previously, after your last commit, the CI tests were not triggered. \r\n\r\nIn order to force the re-running of the CI tests, I had to edit your file using the GitHub UI.\r\n\r\nFirst I tried to do it using my terminal, but I don't have push right to your PR branch: I would ask you next time you open a PR, please mark the checkbox \"Allow edits from maintainers\": https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests",
"Hi @albertvillanova, let me check those again! And regarding that checkbox I thought it was already checked so my bad there 😩 ",
"@albertvillanova again it seems that the tests were not automatically triggered, but I tested those locally and now they work, as previously those were failing as I created an assertion as `self.assertEqual` over an empty list that was compared as `None` while the value was `[]` so I updated it to be `self.assertListEqual` and changed the comparison value to `[]`.",
"@lhoestq any idea why the CI is not triggered?",
"@alvarobartt I have tested locally and the tests continue failing.\r\n\r\nI think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n",
"You're right @albertvillanova I was indeed running the tests with `datasets==2.2.0` rather than with the branch version, I'll check it again! Sorry for the inconvenience...",
"> @alvarobartt I have tested locally and the tests continue failing.\r\n> \r\n> I think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n\r\nIn order to have some regressions tests for the fixed scenario, I've manually updated the value of `_format_columns` in the `ArrowDataset` so as to check whether it's updated or not right after calling `remove_columns`, and it does behave as expected, so with the latest version of this branch the reported issue doesn't occur anymore.",
"Hi again @albertvillanova sorry I was on leave! I'll do that ASAP :hugs:",
"@albertvillanova, does it make sense to add regression tests for `DatasetDict`? As `DatasetDict` doesn't have the attribute `_format_columns`, when we call `remove_columns` over a `DatasetDict` it removes the columns and updates the attributes of each split which is an `ArrowDataset`.\r\n\r\nSo on, we can either:\r\n- Update first the `_format_columns` attribute of each split and then remove the columns over the `DatasetDict`\r\n- Loop over the splits of `DatasetDict` and remove the columns right after updating `_format_columns` of each `ArrowDataset`.\r\n\r\nI assume that the best regression test is the one implemented (mentioned first above), let me know if there's a better way to do that 👍🏻 ",
"I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?",
"> I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?\r\n\r\nHi @lhoestq I can check now!",
"It worked indeed @lhoestq, thanks for the proposal and the review! 🤗 ",
"Oops, I forgot about `@transmit_format`'s existence. From what I see, we should also use this decorator in `flatten`, `rename_column` and `rename_columns`. \r\n\r\n@alvarobartt Let me know if you'd like to work on this (in a subsequent PR).",
"Sure @mariosasko I can prepare another PR to add those too, thanks! "
] | "2022-05-26T11:40:06Z" | "2022-06-14T19:05:37Z" | "2022-06-14T16:01:56Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4411",
"merged_at": "2022-06-14T16:01:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4411"
} | As explained in #4398, calling `dataset.add_faiss_index` after a certain sequence of operations (`cast_column`, `map`, and then `remove_columns`) fails because it tries to look up columns that have already been removed.
After testing some possible fixes, setting the dataset format right after removing the columns seems to work fine, so I added a call to `.set_format` in the `remove_columns` function.
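For reference, a minimal sketch of the call order described above (the column names and toy data are made up for illustration; `add_faiss_index` also assumes `faiss` is installed, and this is not the exact reproducer from #4398):
```python
import numpy as np
from datasets import Dataset

# toy dataset with a text column and a fake embedding column (made-up data)
ds = Dataset.from_dict({
    "text": ["foo", "bar"],
    "embeddings": [np.random.rand(8).tolist(), np.random.rand(8).tolist()],
})

# restricting the format to both columns is what populates `_format_columns`
ds.set_format("numpy", columns=["text", "embeddings"])

# before the fix, "text" could linger in `_format_columns` after being removed
ds = ds.remove_columns("text")

# with the format updated, the index only looks at columns that still exist
ds.add_faiss_index(column="embeddings")
```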
Hope this helps! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4411/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5583/comments | https://api.github.com/repos/huggingface/datasets/issues/5583/events | https://github.com/huggingface/datasets/pull/5583 | 1,601,583,625 | PR_kwDODunzps5K2mIz | 5,583 | Do no write index by default when exporting a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009044 / 0.011353 (-0.002309) | 0.004244 / 0.011008 (-0.006765) | 0.106705 / 0.038508 (0.068197) | 0.029779 / 0.023109 (0.006670) | 0.289684 / 0.275898 (0.013786) | 0.347100 / 0.323480 (0.023620) | 0.007071 / 0.007986 (-0.000915) | 0.003734 / 0.004328 (-0.000595) | 0.077971 / 0.004250 (0.073720) | 0.035323 / 0.037052 (-0.001730) | 0.334520 / 0.258489 (0.076031) | 0.375804 / 0.293841 (0.081964) | 0.049211 / 0.128546 (-0.079335) | 0.016992 / 0.075646 (-0.058654) | 0.337208 / 0.419271 (-0.082064) | 0.053700 / 0.043533 (0.010167) | 0.295750 / 0.255139 (0.040611) | 0.330157 / 0.283200 (0.046958) | 0.097017 / 0.141683 (-0.044666) | 1.379353 / 1.452155 (-0.072802) | 1.402670 / 1.492716 (-0.090047) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012685 / 0.018006 (-0.005321) | 0.474541 / 0.000490 (0.474051) | 0.006752 / 0.000200 (0.006552) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025735 / 0.037411 (-0.011676) | 0.092507 / 0.014526 (0.077982) | 0.100275 / 0.176557 (-0.076281) | 0.180359 / 0.737135 (-0.556777) | 0.104312 / 0.296338 (-0.192026) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456558 / 0.215209 (0.241349) | 4.786667 / 2.077655 (2.709012) | 1.873169 / 1.504120 (0.369050) | 1.640935 / 1.541195 (0.099741) | 1.614543 / 1.468490 
(0.146053) | 0.936144 / 4.584777 (-3.648633) | 4.699886 / 3.745712 (0.954174) | 2.398545 / 5.269862 (-2.871317) | 1.642808 / 4.565676 (-2.922868) | 0.124803 / 0.424275 (-0.299472) | 0.011848 / 0.007607 (0.004241) | 0.631684 / 0.226044 (0.405639) | 6.096052 / 2.268929 (3.827124) | 2.463052 / 55.444624 (-52.981572) | 1.928551 / 6.876477 (-4.947926) | 1.927790 / 2.142072 (-0.214283) | 1.098912 / 4.805227 (-3.706315) | 0.196343 / 6.500664 (-6.304321) | 0.063296 / 0.075469 (-0.012173) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255032 / 1.841788 (-0.586755) | 13.853623 / 8.074308 (5.779315) | 16.303280 / 10.191392 (6.111888) | 0.227287 / 0.680424 (-0.453137) | 0.037527 / 0.534201 (-0.496674) | 0.449345 / 0.579283 (-0.129938) | 0.522054 / 0.434364 (0.087690) | 0.552848 / 0.540337 (0.012511) | 0.642994 / 1.386936 (-0.743942) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008470 / 0.011353 (-0.002883) | 0.005167 / 0.011008 (-0.005841) | 0.077794 / 0.038508 (0.039286) | 0.029228 / 0.023109 (0.006119) | 0.340828 / 0.275898 (0.064930) | 0.400170 / 0.323480 (0.076691) | 0.005485 / 0.007986 (-0.002500) | 0.003854 / 0.004328 (-0.000475) | 0.077597 / 0.004250 (0.073346) | 0.036519 / 0.037052 (-0.000533) | 0.335522 / 0.258489 (0.077033) | 0.412622 / 0.293841 (0.118781) | 0.044587 / 0.128546 (-0.083959) | 0.016024 / 0.075646 (-0.059623) | 0.092312 / 0.419271 (-0.326960) | 0.055660 / 0.043533 (0.012127) | 0.343140 / 0.255139 (0.088001) | 0.386403 / 0.283200 (0.103203) | 0.098634 / 0.141683 (-0.043049) | 1.326126 / 1.452155 (-0.126029) | 1.430316 / 1.492716 (-0.062400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222807 / 0.018006 (0.204801) | 0.473622 / 0.000490 (0.473132) | 0.000376 / 0.000200 (0.000176) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024599 / 0.037411 (-0.012813) | 0.100743 / 0.014526 (0.086217) | 0.112086 / 0.176557 (-0.064471) | 0.198294 / 0.737135 (-0.538842) | 0.111210 / 0.296338 (-0.185129) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494120 / 0.215209 (0.278911) | 5.117958 / 2.077655 (3.040303) | 2.305131 / 1.504120 (0.801011) | 2.015591 / 1.541195 (0.474396) | 2.027284 / 1.468490 (0.558794) | 1.014241 / 4.584777 (-3.570536) | 4.738836 / 3.745712 (0.993124) | 2.519718 / 5.269862 (-2.750143) | 1.706379 / 4.565676 (-2.859298) | 0.122452 / 0.424275 (-0.301824) | 0.011500 / 0.007607 (0.003893) | 0.632864 / 0.226044 (0.406820) | 6.295457 / 2.268929 (4.026529) | 2.824897 / 55.444624 (-52.619727) | 2.324359 / 6.876477 (-4.552117) | 2.281046 / 2.142072 (0.138974) | 1.173570 / 4.805227 (-3.631657) | 0.197195 / 6.500664 (-6.303469) | 0.064845 / 0.075469 (-0.010624) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273224 / 1.841788 (-0.568563) | 14.531155 / 8.074308 (6.456847) | 15.892176 / 10.191392 (5.700784) | 0.208051 / 0.680424 (-0.472373) | 0.023119 / 0.534201 (-0.511082) | 0.422317 / 0.579283 (-0.156966) | 0.519946 / 0.434364 (0.085582) | 0.544517 / 0.540337 (0.004179) | 0.605955 / 1.386936 (-0.780981) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#337a4a91d0268c68f26760321c9b45bb4a98832a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010806 / 0.011353 (-0.000547) | 0.005631 / 0.011008 (-0.005378) | 0.113166 / 0.038508 (0.074657) | 0.042980 / 0.023109 (0.019871) | 0.344856 / 0.275898 (0.068958) | 0.404417 / 0.323480 (0.080938) | 0.012222 / 0.007986 (0.004236) | 0.004470 / 0.004328 (0.000141) | 0.088072 / 0.004250 (0.083822) | 0.049815 / 0.037052 (0.012763) | 0.366532 / 0.258489 (0.108043) | 0.392558 / 0.293841 (0.098717) | 0.045411 / 0.128546 (-0.083135) | 0.014118 / 0.075646 (-0.061529) | 0.392894 / 0.419271 (-0.026378) | 0.067713 / 0.043533 (0.024181) | 0.353013 / 0.255139 (0.097874) | 0.378375 / 0.283200 (0.095175) | 0.123686 / 0.141683 (-0.017996) | 1.665272 / 1.452155 (0.213118) | 1.748383 / 1.492716 (0.255667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011672 / 0.018006 (-0.006335) | 0.481667 / 0.000490 (0.481178) | 0.003644 / 0.000200 (0.003444) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030436 / 0.037411 (-0.006976) | 0.122577 / 0.014526 (0.108052) | 0.135409 / 0.176557 (-0.041148) | 0.220385 / 0.737135 (-0.516750) | 0.143140 / 0.296338 (-0.153199) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471146 / 0.215209 (0.255937) | 4.645023 / 2.077655 (2.567368) | 2.126783 / 1.504120 (0.622663) | 1.907905 / 1.541195 (0.366710) | 1.969561 / 1.468490 
(0.501071) | 0.798670 / 4.584777 (-3.786107) | 4.394787 / 3.745712 (0.649075) | 2.353535 / 5.269862 (-2.916327) | 1.501013 / 4.565676 (-3.064664) | 0.097472 / 0.424275 (-0.326803) | 0.014015 / 0.007607 (0.006408) | 0.589365 / 0.226044 (0.363320) | 5.897331 / 2.268929 (3.628402) | 2.656198 / 55.444624 (-52.788427) | 2.256082 / 6.876477 (-4.620395) | 2.271122 / 2.142072 (0.129050) | 0.961566 / 4.805227 (-3.843661) | 0.188303 / 6.500664 (-6.312361) | 0.073258 / 0.075469 (-0.002211) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.445266 / 1.841788 (-0.396522) | 16.876710 / 8.074308 (8.802402) | 16.004287 / 10.191392 (5.812895) | 0.212252 / 0.680424 (-0.468172) | 0.033186 / 0.534201 (-0.501015) | 0.520564 / 0.579283 (-0.058719) | 0.516865 / 0.434364 (0.082501) | 0.638482 / 0.540337 (0.098144) | 0.761959 / 1.386936 (-0.624977) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008101 / 0.011353 (-0.003252) | 0.005512 / 0.011008 (-0.005497) | 0.086138 / 0.038508 (0.047630) | 0.038605 / 0.023109 (0.015496) | 0.413082 / 0.275898 (0.137184) | 0.444016 / 0.323480 (0.120536) | 0.006196 / 0.007986 (-0.001790) | 0.005736 / 0.004328 (0.001408) | 0.086938 / 0.004250 (0.082688) | 0.052307 / 0.037052 (0.015255) | 0.415206 / 0.258489 (0.156717) | 0.481510 / 0.293841 (0.187669) | 0.041469 / 0.128546 (-0.087077) | 0.013481 / 0.075646 (-0.062165) | 0.101528 / 0.419271 (-0.317744) | 0.056507 / 0.043533 (0.012974) | 0.418166 / 0.255139 (0.163027) | 0.443834 / 0.283200 (0.160634) | 0.116434 / 0.141683 (-0.025249) | 1.651223 / 1.452155 (0.199068) | 1.746429 / 1.492716 (0.253713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242381 / 0.018006 (0.224375) | 0.478826 / 0.000490 (0.478337) | 0.000463 / 0.000200 (0.000264) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031743 / 0.037411 (-0.005668) | 0.126141 / 0.014526 (0.111616) | 0.134539 / 0.176557 (-0.042018) | 0.216546 / 0.737135 (-0.520590) | 0.143513 / 0.296338 (-0.152825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486915 / 0.215209 (0.271706) | 4.833812 / 2.077655 (2.756158) | 2.317785 / 1.504120 (0.813666) | 2.114181 / 1.541195 (0.572986) | 2.153896 / 1.468490 (0.685406) | 0.797490 / 4.584777 (-3.787287) | 4.369950 / 3.745712 (0.624238) | 2.305492 / 5.269862 (-2.964370) | 1.488860 / 4.565676 (-3.076816) | 0.098071 / 0.424275 (-0.326204) | 0.014129 / 0.007607 (0.006522) | 0.611311 / 0.226044 (0.385266) | 6.087482 / 2.268929 (3.818554) | 2.837676 / 55.444624 (-52.606948) | 2.451819 / 6.876477 (-4.424657) | 2.456763 / 2.142072 (0.314690) | 0.957637 / 4.805227 (-3.847590) | 0.190974 / 6.500664 (-6.309690) | 0.074497 / 0.075469 (-0.000972) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.466214 / 1.841788 (-0.375574) | 17.063925 / 8.074308 (8.989617) | 14.630326 / 10.191392 (4.438934) | 0.170570 / 0.680424 (-0.509854) | 0.023794 / 0.534201 (-0.510407) | 0.509175 / 0.579283 (-0.070108) | 0.506485 / 0.434364 (0.072121) | 0.616965 / 0.540337 (0.076628) | 0.718176 / 1.386936 (-0.668760) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4f14de325e26910d026f377756dd8a231150398 \"CML watermark\")\n"
] | "2023-02-27T17:04:46Z" | "2023-02-28T13:52:15Z" | "2023-02-28T13:44:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5583.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5583",
"merged_at": "2023-02-28T13:44:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5583.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5583"
} | Ensures all the writers that use Pandas for conversion (JSON, CSV, SQL) do not export `index` by default (https://github.com/huggingface/datasets/pull/5490 only did this for CSV) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5583/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5583/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4790/comments | https://api.github.com/repos/huggingface/datasets/issues/4790/events | https://github.com/huggingface/datasets/issues/4790 | 1,328,546,904 | I_kwDODunzps5PMARY | 4,790 | Issue with fine classes in trec dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | "2022-08-04T12:28:51Z" | "2022-08-22T16:14:16Z" | "2022-08-22T16:14:16Z" | MEMBER | null | null | null | ## Describe the bug
According to their paper, the TREC dataset contains 2 kinds of classes:
- 6 coarse classes: TREC-6
- 50 fine classes: TREC-50
However, our implementation only has 47 (instead of 50) fine classes. The reason for this is that we only considered the last segment of the label, which is repeated for several coarse classes:
- We have one `desc` fine label instead of 2:
- `DESC:desc`
- `HUM:desc`
- We have one `other` fine label instead of 3:
- `ENTY:other`
- `LOC:other`
- `NUM:other`
From their paper:
> We define a two-layered taxonomy, which represents a natural semantic classification for typical answers in the TREC task. The hierarchy contains 6 coarse classes and 50 fine classes,
> Each coarse class contains a non-overlapping set of fine classes.
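A small sketch of how the collapse happens, using the label strings quoted above (the counting here is only illustrative):
```python
# labels as they appear in the raw TREC data, in "COARSE:fine" form
raw_labels = ["DESC:desc", "HUM:desc", "ENTY:other", "LOC:other", "NUM:other"]

# current behaviour: keeping only the segment after the colon merges distinct classes
fine_only = {label.split(":")[1] for label in raw_labels}
print(sorted(fine_only))  # ['desc', 'other'] -> 2 classes instead of 5

# keeping the full "COARSE:fine" string (or the pair) preserves the 50-class taxonomy
full_fine = sorted(set(raw_labels))
print(full_fine)  # ['DESC:desc', 'ENTY:other', 'HUM:desc', 'LOC:other', 'NUM:other']
```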
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4790/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2506/comments | https://api.github.com/repos/huggingface/datasets/issues/2506/events | https://github.com/huggingface/datasets/pull/2506 | 921,435,598 | MDExOlB1bGxSZXF1ZXN0NjcwNDM4NTgx | 2,506 | Add course banner | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | [] | "2021-06-15T14:03:54Z" | "2021-06-15T16:25:36Z" | "2021-06-15T16:25:35Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2506.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2506",
"merged_at": "2021-06-15T16:25:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2506.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2506"
} | This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2506/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6120/comments | https://api.github.com/repos/huggingface/datasets/issues/6120/events | https://github.com/huggingface/datasets/issues/6120 | 1,836,026,938 | I_kwDODunzps5tb4w6 | 6,120 | Lookahead streaming support? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17175484?v=4",
"events_url": "https://api.github.com/users/PicoCreator/events{/privacy}",
"followers_url": "https://api.github.com/users/PicoCreator/followers",
"following_url": "https://api.github.com/users/PicoCreator/following{/other_user}",
"gists_url": "https://api.github.com/users/PicoCreator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PicoCreator",
"id": 17175484,
"login": "PicoCreator",
"node_id": "MDQ6VXNlcjE3MTc1NDg0",
"organizations_url": "https://api.github.com/users/PicoCreator/orgs",
"received_events_url": "https://api.github.com/users/PicoCreator/received_events",
"repos_url": "https://api.github.com/users/PicoCreator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PicoCreator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PicoCreator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PicoCreator"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"In which format is your dataset? We could expose the `pre_buffer` flag for Parquet to use PyArrow's background thread pool to speed up loading. "
] | "2023-08-04T04:01:52Z" | "2023-08-17T17:48:42Z" | null | NONE | null | null | null | ### Feature request
From what I understand, a streaming dataset currently pulls the data and processes it only as it is requested.
This can introduce significant latency when data is loaded into the training process, since each segment has to be waited on.
The exact delays might be dataset specific (or even mapping instruction/tokenizer specific).
Would it be possible to introduce a `streaming_lookahead` parameter for predictable workloads (even a shuffled dataset with a fixed seed)? Since we can predict in advance what the next few data samples will be, they could be fetched while the current set is being trained on.
With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this would reduce the latency involved in waiting for the dataset to be ready between batches.
### Motivation
Faster streaming performance while training over extra-large, TB-sized datasets
### Your contribution
I currently use HF datasets with the PyTorch Lightning trainer for the RWKV project, and would be able to help test this feature if supported. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6120/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1236/comments | https://api.github.com/repos/huggingface/datasets/issues/1236/events | https://github.com/huggingface/datasets/pull/1236 | 758,263,012 | MDExOlB1bGxSZXF1ZXN0NTMzNDYzOTg2 | 1,236 | Opus finlex dataset of language pair Finnish and Swedish | {
"avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4",
"events_url": "https://api.github.com/users/spatil6/events{/privacy}",
"followers_url": "https://api.github.com/users/spatil6/followers",
"following_url": "https://api.github.com/users/spatil6/following{/other_user}",
"gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/spatil6",
"id": 6419011,
"login": "spatil6",
"node_id": "MDQ6VXNlcjY0MTkwMTE=",
"organizations_url": "https://api.github.com/users/spatil6/orgs",
"received_events_url": "https://api.github.com/users/spatil6/received_events",
"repos_url": "https://api.github.com/users/spatil6/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spatil6/subscriptions",
"type": "User",
"url": "https://api.github.com/users/spatil6"
} | [] | closed | false | null | [] | null | [] | "2020-12-07T07:53:57Z" | "2020-12-08T13:30:33Z" | "2020-12-08T13:30:33Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1236.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1236",
"merged_at": "2020-12-08T13:30:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1236.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1236"
} | Added the Opus_finlex dataset for the language pair Finnish and Swedish.
More info: http://opus.nlpl.eu/Finlex.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1236/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2329/comments | https://api.github.com/repos/huggingface/datasets/issues/2329/events | https://github.com/huggingface/datasets/pull/2329 | 877,924,198 | MDExOlB1bGxSZXF1ZXN0NjMxODA3MTk0 | 2,329 | Add cache dir for in-memory datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"Yes, having `cache_dir` as an attribute looks cleaner.\r\n\r\n\r\n\r\n",
"Good job! Looking forward to this new feature! 🥂",
"@lhoestq Sorry for the late reply. Yes, I'll start working on tests. Thanks for the detailed explanation of the current issues with caching (like the idea of adding the `use_caching` parameter to `load_dataset`) ",
"@lhoestq Sure. I'm aware this is a high-priority issue to some extent, so feel free to take over.\r\n\r\nFew suggestions I have:\r\n* there is a slight difference between setting `use_caching` to `False` in `load_dataset` and disabling caching globally with `set_caching_enabled(False)` because the former will never execute the following code (`self._cache_dir` is always `False`): \r\nhttps://github.com/huggingface/datasets/blob/c231abdb174987419bbde3360b5b9d6a4672c736/src/datasets/arrow_dataset.py#L1807-L1824\r\n, so I'm just checking whether this is intended (if yes, maybe the docs should mention this) or not?\r\n* think we should add the `use_caching` parameter to every method that has the `keep_in_memory` (and `in_memory` 😃) parameter in its signature for better consistency, but I say let's address this in a separate PR. IMO we need one more PR that will deal exclusively with consistency in the caching logic.",
"Hi @mariosasko \r\nWe discussed internally and we think that this feature might not be the direction we're doing to take for these reasons:\r\n\r\n- it goes against our simple definition of caching: on-disk == uses file cache, and in-memory == nothing is written to disk. I think it adds too much complexity just for a minimal flexibility addition\r\n- there are a few edge cases which are really confusing:\r\n - map on an in memory dataset with a cache_file_name specified by the user -> should the result be in memory or from disk ?\r\n - it would require a special cache directory just for in memory datasets, since they don’t have a preferred directory for caching\r\n- it would break a lot of stuff and would require to rewrite significant parts of the core code and the tests\r\n\r\n\r\nSo in the end we're probably going to close this PR.\r\nLet me know what you think, and thank you anyway for your help on this !",
"Hi,\r\n\r\nI'm fine with that. I agree this adds too much complexity. Btw, I like the idea of reverting default in-memory for small datasets that led to this PR.",
"Superseded by #2460 (to close issue #2458)."
] | "2021-05-06T19:35:32Z" | "2021-06-08T19:46:48Z" | "2021-06-08T19:06:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2329.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2329",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2329.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2329"
} | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2329/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1577/comments | https://api.github.com/repos/huggingface/datasets/issues/1577/events | https://github.com/huggingface/datasets/pull/1577 | 767,342,432 | MDExOlB1bGxSZXF1ZXN0NTQwMDg2MzY5 | 1,577 | Add comet metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ricardorei",
"id": 17256847,
"login": "ricardorei",
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"organizations_url": "https://api.github.com/users/ricardorei/orgs",
"received_events_url": "https://api.github.com/users/ricardorei/received_events",
"repos_url": "https://api.github.com/users/ricardorei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ricardorei"
} | [] | closed | false | null | [] | null | [
"I also thought a bit about the fact that \"sources\" can't be added to the batch.. but changing that would require a lot more changes. And I agree that the idea of adding them as part of the references is not ideal. Conceptually they are not references.\r\n\r\nI would keep it like this for now.. And in the future, work on a more consistent batch interface."
] | "2020-12-15T08:56:00Z" | "2021-01-14T13:33:10Z" | "2021-01-14T13:33:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1577.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1577",
"merged_at": "2021-01-14T13:33:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1577.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1577"
} | Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of the available metrics.
COMET was [presented at EMNLP20](https://www.aclweb.org/anthology/2020.emnlp-main.213/) and it is the highest performing metric, so far, on the WMT19 benchmark.
We also participated in the [WMT20 Metrics shared task ](http://www.statmt.org/wmt20/pdf/2020.wmt-1.101.pdf) where once again COMET was validated as a top-performing metric.
I hope that this metric will help researchers and industry workers to better validate their MT systems in the future 🤗 !
Cheers,
Ricardo
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1577/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1577/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4565/comments | https://api.github.com/repos/huggingface/datasets/issues/4565/events | https://github.com/huggingface/datasets/issues/4565 | 1,284,141,666 | I_kwDODunzps5MinJi | 4,565 | Add UFSC OCPap dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/johnnv1",
"id": 20444345,
"login": "johnnv1",
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/johnnv1"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"I will add this directly on the hub (same as #4486)—in https://huggingface.co/lapix"
] | "2022-06-24T20:07:54Z" | "2022-07-06T19:03:02Z" | "2022-07-06T19:03:02Z" | NONE | null | null | null | ## Adding a Dataset
- **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4)
- **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels acquired from 5 slides of cancer diagnosed and 3 healthy of oral brush samples, from distinct patients.
- **Paper:** https://dx.doi.org/10.2139/ssrn.4119212
- **Data:** https://data.mendeley.com/datasets/dr7ydy9xbk/1
- **Motivation:** real data of pap stained oral cytology samples
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4565/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4565/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2022/comments | https://api.github.com/repos/huggingface/datasets/issues/2022/events | https://github.com/huggingface/datasets/issues/2022 | 827,435,033 | MDU6SXNzdWU4Mjc0MzUwMzM= | 2,022 | ValueError when rename_column on splitted dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simonschoe",
"id": 53626067,
"login": "simonschoe",
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simonschoe"
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use the named splits API (for now):\r\n```python\r\ntrain_ds, test_ds = load_dataset(\r\n path='csv', \r\n delimiter='\\t', \r\n data_files=text_files, \r\n split=['train[:90%]', 'train[-10%:]'],\r\n)\r\n\r\ntrain_ds = train_ds.rename_column('sentence', 'text')\r\n```",
"This has been fixed in #2043 , thanks @mariosasko \r\nThe fix is available on master and we'll do a new release soon :)\r\n\r\nfeel free to re-open if you still have issues"
] | "2021-03-10T09:40:38Z" | "2021-03-16T14:06:08Z" | "2021-03-16T14:05:05Z" | NONE | null | null | null | Hi there,
I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_dataset(
    path='csv', # use the 'csv' loading script to load from local tsv files
    delimiter='\t', # tab-separated values
    data_files=text_files, # list of paths to local text files
    split=split, # dict of ReadInstruction splits defined above
)
dataset
```
Part of output:
```python
DatasetDict({
train: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 900
})
test: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 100
})
})
```
Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modelin pipeline. If I run the following code I experience a `ValueError` however:
```python
dataset['train'].rename_column('sentence', 'text')
```
```python
/usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name)
353 for split_name in split_names_from_instruction:
354 if not re.match(_split_re, split_name):
--> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.")
356
357 def __str__(self):
ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('.
```
In particular, this behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? I would assume it is something in the way I defined the split.
Thanks in advance! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2022/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3153/comments | https://api.github.com/repos/huggingface/datasets/issues/3153/events | https://github.com/huggingface/datasets/pull/3153 | 1,034,179,198 | PR_kwDODunzps4tlEVE | 3,153 | Add TER (as implemented in sacrebleu) | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | closed | false | null | [] | null | [
"The problem appears to stem from the omission of the lines that you mentioned. If you add them back and try examples from [this](https://huggingface.co/docs/datasets/using_metrics.html) tutorial (sacrebleu metric example) the code you implemented works fine.\r\n\r\nI think the purpose of these lines is follows:\r\n\r\n1. Sacrebleu metrics confusingly expect a nested list of strings when you have just one reference for each hypothesis (i.e. `[[\"example1\", \"example2\", \"example3]]`), while for cases with more than one reference a _nested list of lists of strings_ (i.e. `[[\"ref1a\", \"ref1b\"], [\"ref2a\", \"ref2b\"], [\"ref3a\", \"ref3b\"]]`) is expected instead. So `transformed_references` line outputs the required single reference format for sacrebleu's ter implementation which you can't pass directly to `compute`.\r\n2. I'm assuming that an additional check is also related to that confusing format with one/many references, because it's really difficult to tell what exactly you're doing wrong if you're not aware of that issue."
] | "2021-10-23T14:26:45Z" | "2021-11-02T11:04:11Z" | "2021-11-02T11:04:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3153.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3153",
"merged_at": "2021-11-02T11:04:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3153.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3153"
} | Implements TER (Translation Edit Rate) as per its implementation in sacrebleu. Sacrebleu for BLEU scores is already implemented in `datasets` so I thought this would be a nice addition.
I started from the sacrebleu implementation, as the two metrics have a lot in common.
Verified with sacrebleu's [testing suite](https://github.com/mjpost/sacrebleu/blob/078c440168c6adc89ba75fe6d63f0d922d42bcfe/test/test_ter.py) that this indeed works as intended.
```python
import datasets
test_cases = [
(['aaaa bbbb cccc dddd'], ['aaaa bbbb cccc dddd'], 0), # perfect match
(['dddd eeee ffff'], ['aaaa bbbb cccc'], 1), # no overlap
([''], ['a'], 1), # corner case, empty hypothesis
(['d e f g h a b c'], ['a b c d e f g h'], 1 / 8), # a single shift fixes MT
(
[
'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit das Bild ungefähr dieselbe Größe aufweist wie die andere Größe .',
'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie Werte für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " ein .',
'Sie können beispielsweise ein Dokument erstellen , das ein Auto über die Bühne enthält .',
'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
],
[
'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit die Darstellung des Bildes in einer anderen Größe beibehalten wird .',
'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " niedrigere Werte ein .',
'Sie können beispielsweise ein Dokument erstellen , das ein Auto enthalt , das sich über die Bühne bewegt .',
'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
],
0.136 # realistic example from WMT dev data (2019)
),
]
ter = datasets.load_metric(r"path\to\datasets\metrics\ter")
predictions = ["hello there general kenobi", "foo bar foobar"]
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
print(ter.compute(predictions=predictions, references=references))
for hyp, ref, score in test_cases:
    # Note the reference transformation, which is different from sacrebleu's input format
results = ter.compute(predictions=hyp, references=[[r] for r in ref])
assert 100*score == results["score"], f"expected {100*score}, got {results['score']}"
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3153/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4039/comments | https://api.github.com/repos/huggingface/datasets/issues/4039/events | https://github.com/huggingface/datasets/pull/4039 | 1,183,468,927 | PR_kwDODunzps41KFIf | 4,039 | Support streaming xcopa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-28T13:45:55Z" | "2022-03-28T16:26:48Z" | "2022-03-28T16:21:46Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4039.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4039",
"merged_at": "2022-03-28T16:21:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4039.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4039"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4039/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3953/comments | https://api.github.com/repos/huggingface/datasets/issues/3953/events | https://github.com/huggingface/datasets/issues/3953 | 1,172,123,736 | I_kwDODunzps5F3TBY | 3,953 | Add ImageNet Sketch | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NielsRogge",
"id": 48327001,
"login": "NielsRogge",
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NielsRogge"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | closed | false | null | [] | null | [
"Can you assign this task to me? @nreimers @mariosasko ",
"Hi! Sure! Let us know if you need any pointers."
] | "2022-03-17T09:20:31Z" | "2022-05-23T18:05:29Z" | "2022-05-23T18:05:29Z" | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** ImageNet Sketch
- **Description:** ImageNet-Sketch is a dataset of sketch-like images that matches the ImageNet classification validation set in categories and scale.
- **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549)
- **Data:** https://github.com/HaohanWang/ImageNet-Sketch
- **Motivation:** Allows for evaluating the robustness of vision models.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3953/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1585/comments | https://api.github.com/repos/huggingface/datasets/issues/1585/events | https://github.com/huggingface/datasets/issues/1585 | 768,831,171 | MDU6SXNzdWU3Njg4MzExNzE= | 1,585 | FileNotFoundError for `amazon_polarity` | {
"avatar_url": "https://avatars.githubusercontent.com/u/24647404?v=4",
"events_url": "https://api.github.com/users/phtephanx/events{/privacy}",
"followers_url": "https://api.github.com/users/phtephanx/followers",
"following_url": "https://api.github.com/users/phtephanx/following{/other_user}",
"gists_url": "https://api.github.com/users/phtephanx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phtephanx",
"id": 24647404,
"login": "phtephanx",
"node_id": "MDQ6VXNlcjI0NjQ3NDA0",
"organizations_url": "https://api.github.com/users/phtephanx/orgs",
"received_events_url": "https://api.github.com/users/phtephanx/received_events",
"repos_url": "https://api.github.com/users/phtephanx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phtephanx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phtephanx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phtephanx"
} | [] | closed | false | null | [] | null | [
"Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. It will be available in the coming soon v2of `datasets` :) \r\n\r\nYou can still access it now if you want, but you will need to install datasets via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`"
] | "2020-12-16T12:51:05Z" | "2020-12-16T16:02:56Z" | "2020-12-16T16:02:56Z" | NONE | null | null | null | Version: `datasets==v1.1.3`
### Reproduction
```python
from datasets import load_dataset
data = load_dataset("amazon_polarity")
```
crashes with
```bash
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py
```
and
```bash
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py
```
and
```bash
FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1585/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5962/comments | https://api.github.com/repos/huggingface/datasets/issues/5962/events | https://github.com/huggingface/datasets/issues/5962 | 1,761,589,882 | I_kwDODunzps5o_7p6 | 5,962 | Issue with train_test_split maintaining the same underlying PyArrow Table | {
"avatar_url": "https://avatars.githubusercontent.com/u/70730520?v=4",
"events_url": "https://api.github.com/users/Oziel14/events{/privacy}",
"followers_url": "https://api.github.com/users/Oziel14/followers",
"following_url": "https://api.github.com/users/Oziel14/following{/other_user}",
"gists_url": "https://api.github.com/users/Oziel14/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Oziel14",
"id": 70730520,
"login": "Oziel14",
"node_id": "MDQ6VXNlcjcwNzMwNTIw",
"organizations_url": "https://api.github.com/users/Oziel14/orgs",
"received_events_url": "https://api.github.com/users/Oziel14/received_events",
"repos_url": "https://api.github.com/users/Oziel14/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Oziel14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oziel14/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Oziel14"
} | [] | open | false | null | [] | null | [] | "2023-06-17T02:19:58Z" | "2023-06-17T02:19:58Z" | null | NONE | null | null | null | ### Describe the bug
I've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.
### Steps to reproduce the bug
1. Load any dataset ```dataset = load_dataset("lhoestq/demo1")```
2. Try the next code:
```python
from datasets import Dataset, DatasetDict
train_size = 0.6
split_train = dataset["train"].train_test_split(
train_size=train_size,
)
separate_dataset_dict = DatasetDict({
"train": split_train["train"],
"test": split_train["test"],
})
```
3. Printing the dataset with ```print(separate_dataset_dict)``` indicates that the train and test splits have 3 and 2 rows, respectively.
4. But the next code:
```python
print(len(separate_dataset_dict["train"].data['id']))
print(len(separate_dataset_dict["test"].data['id']))
```
indicates that both underlying tables still have 5 rows.
### Expected behavior
However, I've noticed that train_test_split["train"].data, test_val_split["train"].data, and test_val_split["test"].data are identical, suggesting that they all point to the same underlying PyArrow Table. This means that the split datasets are not independent, which is not what I expected.
I believe this is a bug in the train_test_split implementation, as I would expect this function to return datasets with separate underlying PyArrow Tables. Could you please help me understand if this is expected behavior, or if there's a workaround to create truly independent split datasets?
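For what it's worth, a minimal sketch of a possible workaround (assuming `flatten_indices()` fits your use case; it materializes the indices mapping produced by `train_test_split` into a separate Arrow table per split):
```python
from datasets import DatasetDict

# Workaround sketch: flatten_indices() writes each split into its own Arrow table,
# so the splits no longer share the original 5-row table.
independent_dataset_dict = DatasetDict({
    "train": split_train["train"].flatten_indices(),
    "test": split_train["test"].flatten_indices(),
})
print(len(independent_dataset_dict["train"].data["id"]))  # 3
print(len(independent_dataset_dict["test"].data["id"]))   # 2
```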
I would appreciate any assistance with this issue. Thank you.
### Environment info
I tried in Colab:
- `datasets` version: 2.13.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
and my PC:
- `datasets` version: 2.13.0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5962/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5962/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4514/comments | https://api.github.com/repos/huggingface/datasets/issues/4514/events | https://github.com/huggingface/datasets/issues/4514 | 1,273,505,230 | I_kwDODunzps5L6CXO | 4,514 | Allow .JPEG as a file extension | {
"avatar_url": "https://avatars.githubusercontent.com/u/34550289?v=4",
"events_url": "https://api.github.com/users/DiGyt/events{/privacy}",
"followers_url": "https://api.github.com/users/DiGyt/followers",
"following_url": "https://api.github.com/users/DiGyt/following{/other_user}",
"gists_url": "https://api.github.com/users/DiGyt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DiGyt",
"id": 34550289,
"login": "DiGyt",
"node_id": "MDQ6VXNlcjM0NTUwMjg5",
"organizations_url": "https://api.github.com/users/DiGyt/orgs",
"received_events_url": "https://api.github.com/users/DiGyt/received_events",
"repos_url": "https://api.github.com/users/DiGyt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DiGyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DiGyt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DiGyt"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi, thanks for reporting! I've opened a PR with the fix.",
"Wow, that was quick! Thank you very much 🙏 "
] | "2022-06-16T12:36:20Z" | "2022-06-20T08:18:46Z" | "2022-06-16T17:11:40Z" | NONE | null | null | null | ## Describe the bug
When loading image data, HF datasets seems to recognize the `.jpg` and `.jpeg` file extensions, but not e.g. `.JPEG`. As the naming convention `.JPEG` is used in important datasets such as ImageNet, I would welcome it if corresponding extensions like `.JPEG` or `.JPG` were allowed.
## Steps to reproduce the bug
```python
# use bash to create 2 sham datasets with jpeg and JPEG ext
!mkdir dataset_a
!mkdir dataset_b
!wget https://upload.wikimedia.org/wikipedia/commons/7/71/Dsc_%28179253513%29.jpeg -O example_img.jpeg
!cp example_img.jpeg ./dataset_a/
!mv example_img.jpeg ./dataset_b/example_img.JPEG
from datasets import load_dataset
# working
df1 = load_dataset("./dataset_a", ignore_verifications=True)
#not working
df2 = load_dataset("./dataset_b", ignore_verifications=True)
# show
print(df1, df2)
```
## Expected results
```
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 1
})
}) DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 1
})
})
```
## Actual results
```
FileNotFoundError: Unable to resolve any data file that matches '['**']' at /..PATH../dataset_b with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
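In the meantime, a hedged sketch of an interim workaround (simply normalizing the extensions on disk; the `dataset_b` path and the idea of renaming the files are assumptions on my side):
```python
from pathlib import Path
from datasets import load_dataset

# Interim workaround sketch: lower-case the .JPEG extensions so the current
# extension matching picks the files up (assumes renaming the files is acceptable).
for path in Path("./dataset_b").glob("*.JPEG"):
    path.rename(path.with_suffix(".jpeg"))

df2 = load_dataset("./dataset_b", ignore_verifications=True)
```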
I know that it can be annoying to allow seemingly arbitrary numbers of file extensions. But I think this one would be really welcome. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4514/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4514/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5036/comments | https://api.github.com/repos/huggingface/datasets/issues/5036/events | https://github.com/huggingface/datasets/pull/5036 | 1,389,094,075 | PR_kwDODunzps4_w8Bs | 5,036 | Add oversampling strategy iterable datasets interleave | {
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ylacombe",
"id": 52246514,
"login": "ylacombe",
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ylacombe"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-09-28T10:10:23Z" | "2022-09-30T12:30:48Z" | "2022-09-30T12:28:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5036",
"merged_at": "2022-09-30T12:28:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5036"
} | Hello everyone,
Following issue #4893 and PR #4831, I propose here an oversampling strategy for an `IterableDataset` list.
The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
It follows roughly the same logic behind #4831, namely:
- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.
- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps track of the datasets which have run out of samples but continues to add them to the new dataset, and stops as soon as every dataset has run out of samples at least once.
In order to be consistent and also to align with the `Dataset` behavior, please note that the behavior of the default strategy (`first_exhausted`) has been changed. Namely, it really stops when a dataset is out of samples whereas it used to stop when receiving the `StopIteration` error.
To give an example of the last note, with the following snippet:
```
>>> from tests.test_iterable_dataset import *
>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [0, 1, 2]])), {}))
>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [10, 11, 12, 13]])), {}))
>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [20, 21, 22, 23, 24]])), {}))
>>> dataset = interleave_datasets([d1, d2, d3])
>>> [x["a"] for x in dataset]
```
The result here will then be `[10, 0, 11, 1, 2]` instead of `[10, 0, 11, 1, 2, 20, 12, 13]`.
I modified the behavior because I found it to be consistent with the under/oversampling approach and because it unified the undersampling and oversampling code, but I remain open to any suggestions.
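For completeness, a short usage sketch (the `stopping_strategy` parameter name is taken from #4831 and assumed to apply here as well; map-style `Dataset`s are used only to keep the example short):
```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12, 13, 14]})

# Oversampling sketch: keep sampling until every dataset has been exhausted at least once.
mixed = interleave_datasets(
    [d1, d2], probabilities=[0.5, 0.5], seed=42, stopping_strategy="all_exhausted"
)
print(mixed["a"])
```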
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5036/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5036/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2840/comments | https://api.github.com/repos/huggingface/datasets/issues/2840/events | https://github.com/huggingface/datasets/issues/2840 | 980,489,074 | MDU6SXNzdWU5ODA0ODkwNzQ= | 2,840 | How can I compute BLEU-4 score use `load_metric` ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/26213546?v=4",
"events_url": "https://api.github.com/users/Doragd/events{/privacy}",
"followers_url": "https://api.github.com/users/Doragd/followers",
"following_url": "https://api.github.com/users/Doragd/following{/other_user}",
"gists_url": "https://api.github.com/users/Doragd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Doragd",
"id": 26213546,
"login": "Doragd",
"node_id": "MDQ6VXNlcjI2MjEzNTQ2",
"organizations_url": "https://api.github.com/users/Doragd/orgs",
"received_events_url": "https://api.github.com/users/Doragd/received_events",
"repos_url": "https://api.github.com/users/Doragd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Doragd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Doragd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Doragd"
} | [] | closed | false | null | [] | null | [] | "2021-08-26T17:36:37Z" | "2021-08-27T08:13:24Z" | "2021-08-27T08:13:24Z" | NONE | null | null | null | I have found the sacrebleu metric. But, I do not know the difference between it and BLEU-4.
If I want to compute BLEU-4 score, what can i do? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2840/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2840/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3770/comments | https://api.github.com/repos/huggingface/datasets/issues/3770/events | https://github.com/huggingface/datasets/issues/3770 | 1,146,336,667 | I_kwDODunzps5EU7Wb | 3,770 | DuplicatedKeysError on msr_sqa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kolk",
"id": 9049591,
"login": "kolk",
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"repos_url": "https://api.github.com/users/kolk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kolk"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @kolk.\r\n\r\nWe are fixing it. "
] | "2022-02-22T00:43:33Z" | "2022-02-22T08:12:39Z" | "2022-02-22T08:12:39Z" | NONE | null | null | null | ### Describe the bug
Failure to generate dataset msr_sqa because of duplicate keys.
### Steps to reproduce the bug
```
from datasets import load_dataset
load_dataset("msr_sqa")
```
### Expected results
The example keys should be unique.
### Actual results
```
>>> load_dataset("msr_sqa")
Downloading:
6.72k/? [00:00<00:00, 148kB/s]
Downloading:
2.93k/? [00:00<00:00, 53.8kB/s]
Using custom data configuration default
Downloading and preparing dataset msr_sqa/default (download: 4.57 MiB, generated: 26.25 MiB, post-processed: Unknown size, total: 30.83 MiB) to /root/.cache/huggingface/datasets/msr_sqa/default/0.0.0/70b2a497bd3cc8fc960a3557d2bad1eac5edde824505e15c9c8ebe4c260fd4d1...
Downloading: 100%
4.80M/4.80M [00:00<00:00, 7.49MB/s]
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator)
1080 example = self.info.features.encode_example(record)
-> 1081 writer.write(example, key)
1082 finally:
8 frames
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
During handling of the above exception, another exception occurred:
DuplicatedKeysError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in check_duplicate_keys(self)
449 for hash, key in self.hkey_record:
450 if hash in tmp_record:
--> 451 raise DuplicatedKeysError(key)
452 else:
453 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
```
### Environment info
datasets version: 1.18.3
Platform: Google colab notebook
Python version: 3.7
PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3770/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4553/comments | https://api.github.com/repos/huggingface/datasets/issues/4553/events | https://github.com/huggingface/datasets/pull/4553 | 1,282,779,560 | PR_kwDODunzps46Q1q7 | 4,553 | Stop dropping columns in to_tf_dataset() before we load batches | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Rebasing fixed the test failures, so this should be ready to review now! There's still a failure on Win but it seems unrelated.",
"Gentle ping @lhoestq ! This is a simple fix (dropping columns after loading a batch from the dataset rather than with `.remove_columns()` to make sure we don't break transforms), and tests are green so we're ready for review!",
"@lhoestq Test is in!"
] | "2022-06-23T18:21:05Z" | "2022-07-04T19:00:13Z" | "2022-07-04T18:49:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4553.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4553",
"merged_at": "2022-07-04T18:49:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4553.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4553"
} | `to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this is causing problems when using a transform, because the dropped columns might be needed to compute the transform. Since there's no real way to check which columns the transform might need, we skip dropping columns and instead drop keys from the batch after we load it.
cc @amyeroberts and https://github.com/huggingface/notebooks/pull/202 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4553/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4553/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1876/comments | https://api.github.com/repos/huggingface/datasets/issues/1876/events | https://github.com/huggingface/datasets/issues/1876 | 808,025,859 | MDU6SXNzdWU4MDgwMjU4NTk= | 1,876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | {
"avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4",
"events_url": "https://api.github.com/users/Vincent950129/events{/privacy}",
"followers_url": "https://api.github.com/users/Vincent950129/followers",
"following_url": "https://api.github.com/users/Vincent950129/following{/other_user}",
"gists_url": "https://api.github.com/users/Vincent950129/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Vincent950129",
"id": 5945326,
"login": "Vincent950129",
"node_id": "MDQ6VXNlcjU5NDUzMjY=",
"organizations_url": "https://api.github.com/users/Vincent950129/orgs",
"received_events_url": "https://api.github.com/users/Vincent950129/received_events",
"repos_url": "https://api.github.com/users/Vincent950129/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Vincent950129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vincent950129/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Vincent950129"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59\r\nI'm opening a PR to update the checksums of the data files.",
"I just merged the fix. It will be available in the new release of `datasets` later today.\r\nYou'll be able to get the new version with\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi, I still meet the error when loading the datasets after upgradeing datasets.\r\n\r\nraise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']",
"This must be related to https://github.com/budzianowski/multiwoz/pull/72\r\nThose files have changed, let me update the checksums for this dataset.\r\n\r\nFor now you can use `ignore_verifications=True` in `load_dataset` to skip the checksum verification."
] | "2021-02-14T19:14:48Z" | "2021-08-04T18:08:00Z" | "2021-08-04T18:08:00Z" | NONE | null | null | null | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json']
```
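As a hedged interim note based on the maintainer comment above: until the updated checksums are released, verification can be skipped, e.g.:
```python
from datasets import load_dataset

# Interim workaround sketch from the comments: skip the checksum verification.
dataset = load_dataset('multi_woz_v22', 'v2.2_active_only', split='train', ignore_verifications=True)
```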
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1876/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2349/comments | https://api.github.com/repos/huggingface/datasets/issues/2349/events | https://github.com/huggingface/datasets/pull/2349 | 888,586,018 | MDExOlB1bGxSZXF1ZXN0NjQxNzYzNzg3 | 2,349 | Update task_ids for Ascent KB | {
"avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4",
"events_url": "https://api.github.com/users/phongnt570/events{/privacy}",
"followers_url": "https://api.github.com/users/phongnt570/followers",
"following_url": "https://api.github.com/users/phongnt570/following{/other_user}",
"gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phongnt570",
"id": 6749421,
"login": "phongnt570",
"node_id": "MDQ6VXNlcjY3NDk0MjE=",
"organizations_url": "https://api.github.com/users/phongnt570/orgs",
"received_events_url": "https://api.github.com/users/phongnt570/received_events",
"repos_url": "https://api.github.com/users/phongnt570/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phongnt570"
} | [] | closed | false | null | [] | null | [] | "2021-05-11T20:44:33Z" | "2021-05-17T10:53:14Z" | "2021-05-17T10:48:34Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2349.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2349",
"merged_at": "2021-05-17T10:48:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2349.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2349"
} | This "other-other-knowledge-base" task is better suited for the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2349/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5031/comments | https://api.github.com/repos/huggingface/datasets/issues/5031/events | https://github.com/huggingface/datasets/pull/5031 | 1,388,201,146 | PR_kwDODunzps4_t82_ | 5,031 | Support hfh 0.10 implicit auth | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq it is now released so you can move forward with it :) ",
"I took your comments into account @Wauplin :)\r\nI also bumped the requirement to 0.2.0 because we're using `set_access_token`\r\n\r\ncc @albertvillanova WDYT ? I edited the CI job to also check for our minimum supported version of hfh at the same time as the minimum pyarrow version",
"@lhoestq great, thanks ! :)"
] | "2022-09-27T18:37:49Z" | "2022-09-30T09:18:24Z" | "2022-09-30T09:15:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5031.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5031",
"merged_at": "2022-09-30T09:15:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5031.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5031"
} | In huggingface-hub 0.10 the `token` parameter is deprecated for dataset_info and list_repo_files in favor of use_auth_token.
Moreover, if `use_auth_token=None`, then the user's token is used implicitly.
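A hedged sketch of the behaviour described above (the parameter names follow the hfh 0.10 deprecation mentioned here and are an assumption rather than a stable API reference):
```python
from huggingface_hub import HfApi

api = HfApi()
# With use_auth_token=None, hfh 0.10 falls back to the locally saved token (implicit auth).
info = api.dataset_info("squad", use_auth_token=None)
files = api.list_repo_files("squad", repo_type="dataset", use_auth_token=None)
```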
I took those two changes into account
Close https://github.com/huggingface/datasets/issues/4990
TODO:
- [x] fix tests
We should wait for hfh 0.10 to be released first to make sure it works correctly before merging
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5031/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5031/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2197/comments | https://api.github.com/repos/huggingface/datasets/issues/2197/events | https://github.com/huggingface/datasets/pull/2197 | 854,356,559 | MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw | 2,197 | fix missing indices_files in load_form_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-04-09T09:37:57Z" | "2021-04-09T09:54:40Z" | "2021-04-09T09:54:39Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2197.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2197",
"merged_at": "2021-04-09T09:54:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2197.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2197"
} | This should fix #2195
`load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2197/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6214/comments | https://api.github.com/repos/huggingface/datasets/issues/6214/events | https://github.com/huggingface/datasets/issues/6214 | 1,881,736,469 | I_kwDODunzps5wKQUV | 6,214 | Unpin fsspec < 2023.9.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null | [] | "2023-09-05T11:02:58Z" | "2023-09-26T15:32:52Z" | "2023-09-26T15:32:52Z" | MEMBER | null | null | null | Once root issue is fixed, remove temporary pin of fsspec < 2023.9.0 introduced by:
- #6210
Related to issue:
- #6209
After investigation, I think the root issue is related to the new glob behavior with double asterisk `**` they have introduced in:
- https://github.com/fsspec/filesystem_spec/pull/1329 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6214/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5768/comments | https://api.github.com/repos/huggingface/datasets/issues/5768/events | https://github.com/huggingface/datasets/issues/5768 | 1,672,494,561 | I_kwDODunzps5jsD3h | 5,768 | load_dataset("squad") doesn't work in 2.7.1 and 2.10.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/57412770?v=4",
"events_url": "https://api.github.com/users/yaseen157/events{/privacy}",
"followers_url": "https://api.github.com/users/yaseen157/followers",
"following_url": "https://api.github.com/users/yaseen157/following{/other_user}",
"gists_url": "https://api.github.com/users/yaseen157/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yaseen157",
"id": 57412770,
"login": "yaseen157",
"node_id": "MDQ6VXNlcjU3NDEyNzcw",
"organizations_url": "https://api.github.com/users/yaseen157/orgs",
"received_events_url": "https://api.github.com/users/yaseen157/received_events",
"repos_url": "https://api.github.com/users/yaseen157/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yaseen157/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaseen157/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yaseen157"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?",
"I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"squad\")\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.27k/5.27k [00:00<00:00, 3.22MB/s]\r\nDownloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.36k/2.36k [00:00<00:00, 1.60MB/s]\r\nDownloading readme: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.67k/7.67k [00:00<00:00, 4.58MB/s]\r\nDownloading and preparing dataset squad/plain_text to ...t/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\nDownloading data: 30.3MB [00:00, 91.8MB/s] | 0/2 [00:00<?, ?it/s]\r\nDownloading data: 4.85MB [00:00, 75.3MB/s] \r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.31it/s]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2157.01it/s]\r\nDataset squad downloaded and prepared to .../.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 463.95it/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 10570\r\n })\r\n})\r\n```",
"I am at a complete loss for what's happening here. A quick summary, I have 3 machines to try this with:\r\n1) My windows 10 laptop\r\n2) Linux machine1, super computer login node\r\n3) Linux machine2, super computer compute node\r\n\r\nLet's define the following as a test script for the machines:\r\n\r\n```\r\nimport traceback\r\nimport datasets\r\nprint(f\"{datasets.__version__=}\")\r\ntry:\r\n ds = datasets.load_dataset(\"squad\")\r\nexcept:\r\n traceback.print_exc()\r\nelse:\r\n print(\"Success!\")\r\n```\r\n\r\nThe Windows laptop enters some sort of traceback recursion loop:\r\n\r\n> datasets.__version__='2.7.1'\r\n> Downloading and preparing dataset squad/plain_text to C:/Users/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████| 2/2 [00:00<?, ?it/s]\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 236, in prepare\r\n> _fixup_main_from_path(data['init_main_from_path'])\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 287, in _fixup_main_from_path\r\n> main_content = runpy.run_path(main_path,\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 267, in run_path\r\n> code, fname = _get_code_from_file(run_name, path_name)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 237, in _get_code_from_file\r\n> with io.open_code(decoded_path) as f:\r\n> OSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\yr3g17\\\\OneDrive - University of Southampton\\\\Documents\\\\PhD-repository\\\\<input>'\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n**this error traceback is endlessly recursive**\r\n\r\nThis is a brand new issue that started today and I didn't even realise was a thing, as I had been using my windows machine to follow tracebacks for the other machines...\r\n\r\nI suspect this issue had something to do with my filepath naming, but I couldn't confirm this when I spent time trying to debug this myself weeks ago, something to do with files being locked and never released. I'm not too concerned about my laptop not working here because I've had so many issues with Microsoft OneDrive and my filesystem.\r\n\r\nLinux machines 1 and 2 were working fine for months, but have all of a sudden stopped working. 
Trying to run linux machine 1 (login node), I get:\r\n\r\n> datasets.__version__='2.10.1'\r\n> Downloading and preparing dataset json/squad to /home/yr3g17/.cache/hugg\r\ningface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2\r\nb650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n> Downloading data files: 100%|███████████████████████████████████████████\r\n█████████████████████████████████████████████| 2/2 [00:00<00:00, 4042.70\r\nit/s]\r\n>Extracting data files: 100%|███████████████████████████████████████\r\n███████████████████████████████████████████████████| 2/2 [00:00<00:00, 1\r\n11.15it/s]\r\n> Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\n and hangs here. This has not happened to me before on the Linux machine. If I forcefully keyboard interrupt, I get:\r\n \r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 2, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/load.py\", line 1782, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/builder.py\", line 793, in download_and_prepare\r\n> with FileLock(lock_path) if is_local else contextlib.nullcontext():\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 320, in __enter__\r\n> self.acquire()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 282, in acquire\r\n> time.sleep(poll_intervall)\r\n\r\nWhich also appears to be file lock related! I resolved this by navigating to my ~/.cache/huggingface/datasets directory and wiping out anything to do with the squad dataset in *.lock files. Now I get:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_load(\"squad\")\r\n\r\n```\r\n> Downloading and preparing dataset squad/plain_text to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb\r\n> 2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 44.75it/s]\r\n> Extracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8.54it/s]\r\n> Dataset squad downloaded and prepared to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150\r\n> cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n> 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 19.77it/s]\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 87599\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 10570\r\n> })\r\n> })\r\n> \r\n\r\nWhich all seems fine right, it's doing what it should be. But now, without ever leaving the IDE, I \"make a subsequent call\" to reuse the data by repeating the command. 
I encounter the following traceback\r\n\r\n`load_dataset(\"squad\")`\r\n\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1151, in dataset_module_factory\r\n> ).get_module()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 631, in get_module\r\n> data_files = DataFilesDict.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n> DataFilesList.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n> data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 369, in resolve_patterns_locally_or_by_urls\r\n> raise FileNotFoundError(error_msg)\r\n> FileNotFoundError: Unable to resolve any data file that matches '['train[-._ 0-9/]**', '**[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**[-\r\n> ._ 0-9/]training[-._ 0-9/]**']' at /mainfs/home/yr3g17/.cache/huggingface/datasets/squad with any supported extension ['csv', 'tsv', 'json', 'jsonl',\r\n> 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'gr\r\n> ib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', '\r\n> mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', '\r\n> emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'G\r\n> RIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG',\r\n> 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF',\r\n> 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ir\r\n> cam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'O\r\n> GG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']\r\n\r\nIt doesn't even appear like I can reliably repeat this process. I'll nuke squad files in my dataset cache and run the Python code again (which downloads a new copy of the dataset to cache). 
It will either fail (as it just did in the quote above), or it will successfully recall the dataset.\r\n\r\nI repeated this nuking process a few times until calling load_dataset was reliably giving me the correct result (no filelocking issues or tracebacks). I then sent the test script as a job to the supercomputer compute nodes (which do not have internet access and therefore depend on cached data from Linux machine 1 login nodes)\r\n\r\n> Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810\r\n> ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n> Traceback (most recent call last):\r\n> File \"/mainfs/scratch/yr3g17/squad_qanswering/3054408/0/../../main.py\", line 5, in <module>\r\n> dataset = load_dataset(\"squad\")\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nand I have absolutely no idea why the second and third machines are producing different tracebacks. I have previously run these exact scripts successfully on the login and compute nodes of the supercomputer, this issue I'm raising has appeared fairly recently for me. This, is where I encounter the TypeError that I opened this issue with, which I was able to traceback (using my laptop before it too started not working) to whatever was dynamically importing \"builder_cls\". That bit of code wasn't doing importing builder_cls correctly and would effectively make the assignment \"builder_cls=None\" resulting in the TypeError. Does any of this help?",
"I'm back on linux machine 1 (login node) now. After submitting that as a job to machine 2 and it failing with TypeError, linux machine 1 now produces identical traceback to machine 2:\r\n\r\n> (arkroyal) [yr3g17@cyan52 squad_qanswering]$ python\r\n> Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>\r\n> from datasets import load_dataset\r\n> load_dataset(\"squad\")\r\n>\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nI thought it might be useful to provide you with my cache file structure:\r\n\r\n>_home_yr3g17_.cache_huggingface_datasets_casino_default_1.1.0_302c3b1ac78c48091deabe83a11f4003c7b472a4e11a8eb92799653785bd5da1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_imdb_plain_text_1.0.0_2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_squad_plain_text_1.0.0_d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_yelp_review_full_yelp_review_full_1.0.0_e8e18e19d7be9e75642fc66b198abadb116f73599ec89a69ba5dd8d1e57ba0bf.lock\r\n> casino\r\n> downloads\r\n> imdb\r\n> json\r\n> squad\r\n> squad_v2\r\n> yelp_review_full\r\n\r\nThe inside of squad/plain_text/1.0.0/ looks like\r\n\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.incomplete_info.lock\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453_builder.lock\r\n",
"I see this is quite a complex use case...\r\n\r\nLet's try multiple things:\r\n- First, update `datasets` and make sure you use the same version in all machines, so that we can easily compare different behaviors.\r\n ```\r\n pip install -U datasets\r\n ```\r\n- Second, wherever you run the `load_dataset(\"squad\")` command, make sure there is not a local directory named \"squad\". The datasets library gives priority to any local file/directory over the datasets on the Hugging Face Hub\r\n - I tell you this, because in one of your trace backs, it seems it refers to a local directory:\r\n ```\r\n Downloading and preparing dataset json/squad to /home/yr3g17/.cache/huggingface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n ```\r\n- Third, to use the \"squad\" dataset from the Hub, you need to have internet connection, so that you can download the \"squad\" Python loading script from the Hub. Do all your machines have internet connection?\r\n - I ask this because of this error message:\r\n ```\r\n Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n ```\r\n- Fourth, to be sure that we avoid any issues with the cache, it is a good idea to remove it and regenerate it. Remove `.cache/huggingface/datasets` and also `.cache/huggingface/modules`\r\n- Fifth, as an additional debugging tool, let's be sure we use the latest \"squad\" Python loading script by passing the revision parameter:\r\n ```\r\n ds = load_dataset(\"squad\", revision=\"5fe18c4c680f9922d794e3f4dd673a751c74ee37\")\r\n ```",
"Additionally, we just had an infrastructure issue on the Hugging Face Hub at around 11:30 today. That might have contributed to the connectivity issue... It is fixed now.\r\n\r\nhttps://status.huggingface.co/",
"Hi again, thanks for your help and insight Albert Villanova.\r\n\r\nIt's all working now, so thank you for that. For the benefit of anyone else who ends up in this thread, I solved the problem by addressing Albert's advice:\r\n\r\n(1) Both Windows and Linux machine 1 (have internet access) and can now access the SQuAD dataset. The supercomputer login node can only access version 2.7.1, but my Windows laptop is running on datasets 2.11.0 just fine. I suspect it was just a perfect storm alongside the aforementioned \"infrastructure issue\".\r\n\r\n(2) I did have a local directory called squad, because I was using a local copy of evaluate's \"SQuAD\" metric. The supercomputer compute nodes do not have internet access and treat `metric = evaluate.load('<x>')` as a way of loading a metric at the local path `./<x>/<x>.py`, which could've been a related issue as I was storing the metric under `squad/squad.py`. Don't be lazy like me and store the evaluation code under a path with a name that can be misinterpreted.\r\n\r\n(3) I can't give internet access to the supercomputer compute nodes, so local files do just fine here.\r\n\r\n(4) The windows and Linux machine 1 can both access the internet and were getting fresh copies of the dataset from the huggingface hub. Linux machine 2 was working after I cleared the contents of ~/.cache/huggingface/....\r\n\r\nI feel silly now, knowing it was all so simple! Sorry about that Albert, and thanks again for the help. I've not raised a Github issue like this before, so I'm not sure if I should be close my own issues or if this is something you guys do?",
"Thanks for your detailed feedback which for sure will be useful to other community members."
] | "2023-04-18T07:10:56Z" | "2023-04-20T10:27:23Z" | "2023-04-20T10:27:22Z" | NONE | null | null | null | ### Describe the bug
There is an issue that seems to be unique to the "squad" dataset, in which it cannot be loaded using standard methods. This issue is most quickly reproduced from the command line, using the HF examples to verify a dataset is loaded properly.
This is not a problem with the "squad_v2" dataset, for example.
### Steps to reproduce the bug
cmd line
> $ python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"
OR
Python IDE
> from datasets import load_dataset
> load_dataset("squad")
### Expected behavior
I expected either to see the output described here from running the very same command in the command line ([https://huggingface.co/docs/datasets/installation]), or any output that does not raise Python's TypeError.
There is some funky behaviour in the dataset builder portion of the codebase that means it is either trying to import the squad dataset with an incorrect path, or the squad dataset couldn't be downloaded. I'm not really sure what the problem is beyond that. Messing around with caching, I did manage to get it to load the dataset once, but couldn't repeat this.
### Environment info
datasets=2.7.1 **or** 2.10.1, python=3.10.8, Linux 3.10.0-1160.36.2.el7.x86_64 **or** Windows 10-64
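For anyone hitting the same `TypeError: 'NoneType' object is not callable`, here is a small debugging snippet assembled from the suggestions in the comments above (the revision hash is the one quoted there; wiping the cache directories is optional and destructive, so treat this as a sketch rather than a definitive fix):

```python
import shutil
from pathlib import Path

from datasets import load_dataset

# Optional: wipe the datasets/modules caches so nothing stale or locked is reused.
for cache_dir in ["datasets", "modules"]:
    path = Path.home() / ".cache" / "huggingface" / cache_dir
    if path.exists():
        shutil.rmtree(path)

# Make sure no local directory named "squad" shadows the Hub dataset before running this.
ds = load_dataset("squad", revision="5fe18c4c680f9922d794e3f4dd673a751c74ee37")
print(ds)
```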
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5768/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5768/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2935/comments | https://api.github.com/repos/huggingface/datasets/issues/2935/events | https://github.com/huggingface/datasets/pull/2935 | 999,518,469 | PR_kwDODunzps4r5j8B | 2,935 | Add Jigsaw unintended Bias | {
"avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4",
"events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}",
"followers_url": "https://api.github.com/users/Iwontbecreative/followers",
"following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}",
"gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Iwontbecreative",
"id": 494951,
"login": "Iwontbecreative",
"node_id": "MDQ6VXNlcjQ5NDk1MQ==",
"organizations_url": "https://api.github.com/users/Iwontbecreative/orgs",
"received_events_url": "https://api.github.com/users/Iwontbecreative/received_events",
"repos_url": "https://api.github.com/users/Iwontbecreative/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Iwontbecreative"
} | [] | closed | false | null | [] | null | [
"Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix",
"@lhoestq implemented your changes, I think this might be ready for another look.",
"Thanks @lhoestq, implemented the changes, let me know if anything else pops up."
] | "2021-09-17T16:12:31Z" | "2021-09-24T10:41:52Z" | "2021-09-24T10:41:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2935",
"merged_at": "2021-09-24T10:41:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2935"
} | Hi,
Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff.
This requires a manual download, and I had some trouble generating the dummy_data in this setting, so I'm welcoming feedback there. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2935/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3008/comments | https://api.github.com/repos/huggingface/datasets/issues/3008/events | https://github.com/huggingface/datasets/pull/3008 | 1,014,849,163 | PR_kwDODunzps4sn7iU | 3,008 | Fix precision/recall metrics with None average | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-10-04T07:54:15Z" | "2021-10-04T09:29:37Z" | "2021-10-04T09:29:36Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3008",
"merged_at": "2021-10-04T09:29:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3008"
} | Related to issue #2979 and PR #2992. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3008/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3008/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4609/comments | https://api.github.com/repos/huggingface/datasets/issues/4609/events | https://github.com/huggingface/datasets/issues/4609 | 1,290,392,083 | I_kwDODunzps5M6dIT | 4,609 | librispeech dataset has to download whole subset when specifing the split to use | {
"avatar_url": "https://avatars.githubusercontent.com/u/73462159?v=4",
"events_url": "https://api.github.com/users/sunhaozhepy/events{/privacy}",
"followers_url": "https://api.github.com/users/sunhaozhepy/followers",
"following_url": "https://api.github.com/users/sunhaozhepy/following{/other_user}",
"gists_url": "https://api.github.com/users/sunhaozhepy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sunhaozhepy",
"id": 73462159,
"login": "sunhaozhepy",
"node_id": "MDQ6VXNlcjczNDYyMTU5",
"organizations_url": "https://api.github.com/users/sunhaozhepy/orgs",
"received_events_url": "https://api.github.com/users/sunhaozhepy/received_events",
"repos_url": "https://api.github.com/users/sunhaozhepy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sunhaozhepy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunhaozhepy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sunhaozhepy"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi! You can use streaming to fetch only a subset of the data:\r\n```python\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\", streaming=True)\r\n```\r\nAlso, we plan to make it possible to download a particular split in the non-streaming mode, but this task is not easy due to how our dataset scripts are structured.",
"Hi,\r\n\r\nThat's a great help. Thank you very much."
] | "2022-06-30T16:38:24Z" | "2022-07-12T21:44:32Z" | "2022-07-12T21:44:32Z" | NONE | null | null | null | ## Describe the bug
The librispeech dataset has to download the whole subset even when specifying the split to use.
## Steps to reproduce the bug
see below
# Sample code to reproduce the bug
```
!pip install datasets
from datasets import load_dataset
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100")
```
## Expected results
Only the requested split ("train.clean.100") is downloaded.
## Actual results
All four splits in the "clean" subset are downloaded.
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
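For reference, a sketch of the streaming workaround suggested in the first comment above, which avoids materializing the other splits on disk (the `.take(5)` call is only there to illustrate iteration):

```python
from datasets import load_dataset

# Only the requested split is streamed; nothing else is downloaded up front.
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100", streaming=True)
for example in raw_dataset.take(5):
    print(example["id"], example["text"])
```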
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4609/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4740/comments | https://api.github.com/repos/huggingface/datasets/issues/4740/events | https://github.com/huggingface/datasets/pull/4740 | 1,316,478,007 | PR_kwDODunzps48BX5l | 4,740 | Fix multiprocessing in map_nested | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq as a workaround to preserve previous behavior, the parameter `multiprocessing_min_length=16` is passed from `download` to `map_nested`, so that multiprocessing is only used if at least 16 files to be downloaded.\r\n\r\nNote that there is a small breaking change (I think previously it was unintended behavior, so that I have fixed it):\r\n- Before (with default `num_proc=16`) if there were 16 files to be downloaded, multiprocessing was not used\r\n- Now (with default `num_proc=16`) if there are 16 files to be downloaded, multiprocessing is used",
"Thanks for the workaround !"
] | "2022-07-25T08:44:19Z" | "2022-07-28T10:53:23Z" | "2022-07-28T10:40:31Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4740.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4740",
"merged_at": "2022-07-28T10:40:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4740.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4740"
} | As previously discussed:
Before, multiprocessing was not used in `map_nested` if `num_proc` was greater than or equal to `len(iterable)`.
- Multiprocessing was not used, e.g., when passing `num_proc=20` but having only 19 files to download
- Since `DownloadManager` sets `num_proc=16` by default, multiprocessing was previously only used when `len(iterable) > 16`
Now, if `num_proc` is greater than or equal to `len(iterable)`, `num_proc` is set to `len(iterable)` and multiprocessing is used.
- We pass the variable `parallel_min_length=16`, so that multiprocessing is only used if there are at least 16 files to download
- ~As by default, `DownloadManager` sets `num_proc=16`, now multiprocessing is used when `len(iterable)>1` by default~
See discussion below.
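For illustration only (this is a minimal sketch, not the actual `map_nested` implementation), the gating logic described above could look like this:

```python
from multiprocessing import Pool

def map_nested_sketch(fn, iterable, num_proc=16, parallel_min_length=16):
    # Run sequentially when there are too few items to justify spawning workers.
    if num_proc is None or num_proc <= 1 or len(iterable) < parallel_min_length:
        return [fn(item) for item in iterable]
    # Never spawn more workers than there are items to process.
    num_proc = min(num_proc, len(iterable))
    with Pool(num_proc) as pool:
        return pool.map(fn, iterable)
```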
~After having had to fix some tests (87602ac), I am wondering:~
- ~do we want to have multiprocessing by default?~
- ~please note that `DownloadManager.download` sets `num_proc=16` by default~
- ~or would it be better to ask the user to set it explicitly if they want multiprocessing (and default to `num_proc=1`)?~
Fix #4636.
CC: @nateraw | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4740/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4740/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3743/comments | https://api.github.com/repos/huggingface/datasets/issues/3743/events | https://github.com/huggingface/datasets/pull/3743 | 1,141,176,011 | PR_kwDODunzps4y-2Do | 3,743 | initial monash time series forecasting repository | {
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kashif",
"id": 8100,
"login": "kashif",
"node_id": "MDQ6VXNlcjgxMDA=",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"repos_url": "https://api.github.com/users/kashif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kashif"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR, merging !",
"thanks 🙇🏽 "
] | "2022-02-17T10:51:31Z" | "2022-03-21T09:54:41Z" | "2022-03-21T09:50:16Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3743.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3743",
"merged_at": "2022-03-21T09:50:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3743.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3743"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3743/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3743/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2988/comments | https://api.github.com/repos/huggingface/datasets/issues/2988/events | https://github.com/huggingface/datasets/issues/2988 | 1,011,148,017 | I_kwDODunzps48ROTx | 2,988 | IndexError: Invalid key: 14 is out of bounds for size 0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! Could you check the length of the `self.dataset` object (i.e. the Dataset object passed to the data loader) ? It looks like the dataset is empty.\r\nNot sure why the SWA optimizer would cause this though.",
"Any updates on this? \r\nThe same error occurred to me too when running `cardiffnlp/twitter-roberta-base-sentiment` on a custom dataset. This happened when I tried to do `model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3])` without using sagemaker distribution. \r\nPython: 3.6.13\r\ndatasets: 1.6.2",
"Hi @ruisi-su, do you have this issue while using SWA as well, or just data parallel ?\r\n\r\nIf you have a code example to reproduce this issue it would also be helpful",
"@lhoestq I had this issue without SWA. I followed [this](https://github.com/huggingface/notebooks/blob/master/sagemaker/03_distributed_training_data_parallelism/sagemaker-notebook.ipynb) notebook to utilize multiple gpus on the [roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) model. This tutorial could only work if I am on `ml.p3.16xlarge`, which I don't have access to. So I tried using just `model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3]` before calling `trainer.fit()`. But maybe this is not the right way to do distributed training. I can provide a code example if that will be more helpful.",
"It might be an issue with old versions of `datasets`, can you try updating `datasets` ?",
"FYI I encountered the exact same error using the latest versions of `datasets`, `transformers` and `pyarrow`, without using any kind of SWA or dataparallel: \r\n\r\n```\r\n# packages in environment at C:\\Users\\zhang\\mambaforge:\r\n#\r\n# Name Version Build Channel\r\ncudatoolkit 11.0.3 h3f58a73_9 https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge\r\ndatasets 1.17.0 pypi_0 pypi\r\npyarrow 6.0.1 pypi_0 pypi\r\npytorch 1.7.1 py3.9_cuda110_cudnn8_0 pytorch\r\ntornado 6.1 py39hb82d6ee_2 https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge\r\n```\r\n\r\n```\r\n> python --version\r\n> 3.9.7\r\n```",
"Same error here! Datasets version `1.18.3` freshly updated.\r\n\r\n`IndexError: Invalid key: 90 is out of bounds for size 0`\r\n\r\nMy task is finetuning the model for token classification.\r\n\r\n**Solved**: I make a mistake while updating the dataset during the map, you should check that you return the correct values.\r\n",
"cc @sgugger This probably comes from the `Trainer` removing all the columns of a dataset, do you think we can improve the error message in this case ?",
"The `Trainer` clearly logs when it removes columns in the dataset. I'm not too sure of where the bug appears as I haven't seen a clear reproducer. Happy to display a more helpful error message, but I'd need a reproducer to see what the exact problem is to design the right test and warning :-) ",
"Well, if I can try to suggest how to reproduce, please try by do not returning any updated content in the map function used to tokenize input (e.g., in TokenClassification). I can leave here my wrong version for reference:\r\n\r\n```python\r\ndef preprocess_function(examples):\r\n\r\n text = examples[\"text\"]\r\n \r\n inputs = tokenizer(\r\n text,\r\n max_length=512,\r\n truncation=\"only_second\",\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n\r\n offset_mapping = inputs.pop(\"offset_mapping\")\r\n # ... processing code\r\n\r\n inputs[\"labels\"] = label_ids\r\n #return inputs\r\n \r\ntrain_ds = train_ds.map(preprocess_function, batched=False)\r\ntest_ds = test_ds.map(preprocess_function, batched=False)\r\neval_ds = eval_ds.map(preprocess_function, batched=False)\r\n```\r\n\r\nOf course, returning inputs solved the problem. As suggestion, a possible error message could display \"IndexError: the `key` required by trainer are not found in the dataset\" (just an hypothesis, I think there could be something better). \r\n\r\nPlease tell me if you need more details to reproduce, glad to help!",
"That's the thing though. The `Trainer` has no idea which inputs are required or not since all models can have different kinds of inputs, and it can work for models outside of the Transformers library. I can add a clear error message if I get an empty batch, as this is easy to detect, but that's pretty much it.",
"I think that it could be enough to ease the identification of the problem.",
"Done in [this commit](https://github.com/huggingface/transformers/commit/c87cfd653c4de3d4743a9ae09d749282d94d5829)"
] | "2021-09-29T16:04:24Z" | "2022-04-10T14:49:49Z" | "2022-04-10T14:49:49Z" | NONE | null | null | null | ## Describe the bug
Hi. I am trying to implement the stochastic weight averaging (SWA) optimizer with the transformers library, as described here: https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ . For this I am using the run_clm.py script, which works fine before adding the SWA optimizer. The moment I wrap the model with `swa_model = AveragedModel(model)` in this script, I get the error below. Since I am NOT touching the dataloader part, I am confused about why this is occurring. I very much appreciate your opinion on this, @lhoestq
## Steps to reproduce the bug
```
Traceback (most recent call last):
File "run_clm.py", line 723, in <module>
main()
File "run_clm.py", line 669, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/transformers/trainer.py", line 1258, in train
for step, inputs in enumerate(epoch_iterator):
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1530, in __getitem__
format_kwargs=self._format_kwargs,
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1517, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 368, in query_table
_check_valid_index_key(key, size)
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 311, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 14 is out of bounds for size 0
```
## Expected results
not getting the index error
## Actual results
Please see the above
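For what it's worth, the error message itself just means the table backing the dataset has zero rows at lookup time; it can be reproduced independently of SWA (a minimal illustration, not the training setup above):

```python
from datasets import Dataset

empty = Dataset.from_dict({"labels": []})
print(len(empty))  # 0
empty[14]  # raises IndexError: Invalid key: 14 is out of bounds for size 0
```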
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets 1.12.1
- Platform: linux
- Python version: 3.7.11
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2988/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4018/comments | https://api.github.com/repos/huggingface/datasets/issues/4018/events | https://github.com/huggingface/datasets/pull/4018 | 1,180,622,816 | PR_kwDODunzps41Aj7g | 4,018 | Replace yelp_review_full data url | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-25T10:37:18Z" | "2022-03-25T15:01:02Z" | "2022-03-25T14:56:10Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4018.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4018",
"merged_at": "2022-03-25T14:56:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4018.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4018"
} | I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive.
Close https://github.com/huggingface/datasets/issues/4005 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4018/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5728/comments | https://api.github.com/repos/huggingface/datasets/issues/5728/events | https://github.com/huggingface/datasets/issues/5728 | 1,661,925,932 | I_kwDODunzps5jDvos | 5,728 | The order of data split names is nondeterministic | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | "2023-04-11T07:31:25Z" | "2023-04-26T15:05:13Z" | "2023-04-26T15:05:13Z" | MEMBER | null | null | null | After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718
```
FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']
At index 0 diff: 'random' != 'train'
Full diff:
- ['train', 'random']
+ ['random', 'train']
```
I have checked locally and found out that the data split order is nondeterministic.
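A small, hypothetical illustration of the failure mode (the actual culprit is noted just below): collecting split names in a `set` gives no ordering guarantee across runs, whereas sorting or preserving insertion order does:

```python
shard_names = ["train-00000-of-00002", "random-00000-of-00002", "train-00001-of-00002"]

splits_as_set = {name.split("-")[0] for name in shard_names}  # iteration order may differ between runs
splits_sorted = sorted(splits_as_set)  # always ['random', 'train']
splits_in_order = list(dict.fromkeys(name.split("-")[0] for name in shard_names))  # keeps first-seen order

print(splits_as_set, splits_sorted, splits_in_order)
```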
This is caused by the use of `set` for sharded splits. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5728/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6004/comments | https://api.github.com/repos/huggingface/datasets/issues/6004/events | https://github.com/huggingface/datasets/pull/6004 | 1,786,636,368 | PR_kwDODunzps5UjN2h | 6,004 | Misc improvements | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006897 / 0.011353 (-0.004456) | 0.004207 / 0.011008 (-0.006802) | 0.104828 / 0.038508 (0.066320) | 0.048054 / 0.023109 (0.024945) | 0.373991 / 0.275898 (0.098093) | 0.426740 / 0.323480 (0.103260) | 0.005540 / 0.007986 (-0.002446) | 0.003531 / 0.004328 (-0.000797) | 0.079304 / 0.004250 (0.075053) | 0.066996 / 0.037052 (0.029944) | 0.370675 / 0.258489 (0.112186) | 0.414154 / 0.293841 (0.120313) | 0.031567 / 0.128546 (-0.096979) | 0.008843 / 0.075646 (-0.066803) | 0.357426 / 0.419271 (-0.061845) | 0.067040 / 0.043533 (0.023508) | 0.362384 / 0.255139 (0.107245) | 0.376056 / 0.283200 (0.092856) | 0.032985 / 0.141683 (-0.108697) | 1.560603 / 1.452155 (0.108448) | 1.619024 / 1.492716 (0.126308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229059 / 0.018006 (0.211053) | 0.440513 / 0.000490 (0.440023) | 0.004647 / 0.000200 (0.004447) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029517 / 0.037411 (-0.007894) | 0.120974 / 0.014526 (0.106448) | 0.125070 / 0.176557 (-0.051486) | 0.184695 / 0.737135 (-0.552441) | 0.130244 / 0.296338 (-0.166095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436930 / 0.215209 (0.221721) | 4.356118 / 2.077655 (2.278463) | 2.049169 / 1.504120 (0.545049) | 1.842898 / 1.541195 (0.301703) | 1.918948 / 1.468490 
(0.450458) | 0.553573 / 4.584777 (-4.031204) | 3.883195 / 3.745712 (0.137483) | 3.209780 / 5.269862 (-2.060081) | 1.551707 / 4.565676 (-3.013970) | 0.068181 / 0.424275 (-0.356094) | 0.012370 / 0.007607 (0.004762) | 0.539899 / 0.226044 (0.313854) | 5.380008 / 2.268929 (3.111079) | 2.518178 / 55.444624 (-52.926446) | 2.174190 / 6.876477 (-4.702286) | 2.317812 / 2.142072 (0.175740) | 0.674154 / 4.805227 (-4.131073) | 0.149313 / 6.500664 (-6.351351) | 0.068297 / 0.075469 (-0.007172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261426 / 1.841788 (-0.580362) | 15.316378 / 8.074308 (7.242070) | 13.573512 / 10.191392 (3.382120) | 0.190022 / 0.680424 (-0.490401) | 0.018697 / 0.534201 (-0.515504) | 0.448122 / 0.579283 (-0.131161) | 0.435044 / 0.434364 (0.000681) | 0.550065 / 0.540337 (0.009728) | 0.653547 / 1.386936 (-0.733389) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007116 / 0.011353 (-0.004237) | 0.004375 / 0.011008 (-0.006633) | 0.081793 / 0.038508 (0.043285) | 0.047980 / 0.023109 (0.024871) | 0.392185 / 0.275898 (0.116287) | 0.462263 / 0.323480 (0.138783) | 0.005574 / 0.007986 (-0.002412) | 0.003552 / 0.004328 (-0.000776) | 0.080413 / 0.004250 (0.076162) | 0.065539 / 0.037052 (0.028487) | 0.413137 / 0.258489 (0.154648) | 0.467377 / 0.293841 (0.173536) | 0.034386 / 0.128546 (-0.094160) | 0.009183 / 0.075646 (-0.066464) | 0.087542 / 0.419271 (-0.331730) | 0.053954 / 0.043533 (0.010421) | 0.385096 / 0.255139 (0.129957) | 0.404900 / 0.283200 (0.121701) | 0.025908 / 0.141683 (-0.115775) | 1.550159 / 1.452155 (0.098005) | 1.598794 / 1.492716 (0.106078) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246222 / 0.018006 (0.228216) | 0.441095 / 0.000490 (0.440605) | 0.006863 / 0.000200 (0.006663) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032179 / 0.037411 (-0.005233) | 0.120112 / 0.014526 (0.105586) | 0.129326 / 0.176557 (-0.047230) | 0.184542 / 0.737135 (-0.552593) | 0.135038 / 0.296338 (-0.161300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459002 / 0.215209 (0.243793) | 4.580258 / 2.077655 (2.502604) | 2.296689 / 1.504120 (0.792569) | 2.104338 / 1.541195 (0.563143) | 2.182896 / 1.468490 (0.714406) | 0.546447 / 4.584777 (-4.038330) | 3.854047 / 3.745712 (0.108335) | 1.873829 / 5.269862 (-3.396032) | 1.116484 / 4.565676 (-3.449193) | 0.067158 / 0.424275 (-0.357117) | 0.012035 / 0.007607 (0.004428) | 0.556642 / 0.226044 (0.330597) | 5.574436 / 2.268929 (3.305508) | 2.828223 / 55.444624 (-52.616402) | 2.519851 / 6.876477 (-4.356626) | 2.668594 / 2.142072 (0.526521) | 0.675989 / 4.805227 (-4.129238) | 0.146075 / 6.500664 (-6.354589) | 0.067788 / 0.075469 (-0.007681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345958 / 1.841788 (-0.495830) | 15.672748 / 8.074308 (7.598440) | 14.937583 / 10.191392 (4.746191) | 0.163479 / 0.680424 (-0.516945) | 0.018364 / 0.534201 (-0.515837) | 0.433296 / 0.579283 (-0.145987) | 0.432463 / 0.434364 (-0.001901) | 0.512000 / 0.540337 (-0.028338) | 0.619397 / 1.386936 (-0.767539) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0832d48a07ed00b406271f4b4439e6d54ae38ebf \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010097 / 0.011353 (-0.001256) | 0.005070 / 0.011008 (-0.005939) | 0.118638 / 0.038508 (0.080130) | 0.043651 / 0.023109 (0.020542) | 0.356074 / 0.275898 (0.080176) | 0.414578 / 0.323480 (0.091098) | 0.005939 / 0.007986 (-0.002046) | 0.004927 / 0.004328 (0.000598) | 0.089545 / 0.004250 (0.085294) | 0.067533 / 0.037052 (0.030481) | 0.371550 / 0.258489 (0.113061) | 0.417808 / 0.293841 (0.123967) | 0.045186 / 0.128546 (-0.083361) | 0.015763 / 0.075646 (-0.059883) | 0.393304 / 0.419271 (-0.025967) | 0.065123 / 0.043533 (0.021591) | 0.345057 / 0.255139 (0.089918) | 0.378809 / 0.283200 (0.095610) | 0.033243 / 0.141683 (-0.108440) | 1.679956 / 1.452155 (0.227802) | 1.775456 / 1.492716 (0.282739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229723 / 0.018006 (0.211717) | 0.554630 / 0.000490 (0.554140) | 0.008729 / 0.000200 (0.008529) | 0.000183 / 0.000054 (0.000129) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027284 / 0.037411 (-0.010128) | 0.114741 / 0.014526 (0.100215) | 0.129188 / 0.176557 (-0.047369) | 0.189270 / 0.737135 (-0.547866) | 0.126000 / 0.296338 (-0.170339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.580417 / 0.215209 (0.365208) | 5.829337 / 2.077655 (3.751683) | 2.421191 / 1.504120 (0.917071) | 2.063673 / 1.541195 (0.522479) | 2.133427 / 1.468490 
(0.664937) | 0.830964 / 4.584777 (-3.753813) | 5.107139 / 3.745712 (1.361427) | 4.599451 / 5.269862 (-0.670410) | 2.406502 / 4.565676 (-2.159175) | 0.100422 / 0.424275 (-0.323853) | 0.011850 / 0.007607 (0.004243) | 0.741881 / 0.226044 (0.515836) | 7.425689 / 2.268929 (5.156760) | 3.068948 / 55.444624 (-52.375676) | 2.496292 / 6.876477 (-4.380184) | 2.566420 / 2.142072 (0.424348) | 1.093084 / 4.805227 (-3.712144) | 0.224106 / 6.500664 (-6.276558) | 0.084549 / 0.075469 (0.009080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.416315 / 1.841788 (-0.425473) | 16.306901 / 8.074308 (8.232593) | 19.792419 / 10.191392 (9.601027) | 0.224223 / 0.680424 (-0.456201) | 0.026385 / 0.534201 (-0.507816) | 0.463460 / 0.579283 (-0.115823) | 0.598385 / 0.434364 (0.164021) | 0.543981 / 0.540337 (0.003644) | 0.647454 / 1.386936 (-0.739482) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009470 / 0.011353 (-0.001883) | 0.004800 / 0.011008 (-0.006208) | 0.094276 / 0.038508 (0.055768) | 0.045157 / 0.023109 (0.022048) | 0.397302 / 0.275898 (0.121404) | 0.474213 / 0.323480 (0.150733) | 0.005826 / 0.007986 (-0.002160) | 0.003724 / 0.004328 (-0.000605) | 0.090060 / 0.004250 (0.085809) | 0.066671 / 0.037052 (0.029618) | 0.439560 / 0.258489 (0.181071) | 0.468598 / 0.293841 (0.174757) | 0.044549 / 0.128546 (-0.083997) | 0.014000 / 0.075646 (-0.061646) | 0.110457 / 0.419271 (-0.308815) | 0.065898 / 0.043533 (0.022365) | 0.408101 / 0.255139 (0.152962) | 0.433473 / 0.283200 (0.150273) | 0.038438 / 0.141683 (-0.103245) | 1.767781 / 1.452155 (0.315626) | 1.791575 / 1.492716 (0.298859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230257 / 0.018006 (0.212251) | 0.492280 / 0.000490 (0.491790) | 0.005110 / 0.000200 (0.004910) | 0.000119 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028854 / 0.037411 (-0.008557) | 0.111702 / 0.014526 (0.097176) | 0.122040 / 0.176557 (-0.054517) | 0.179103 / 0.737135 (-0.558032) | 0.128869 / 0.296338 (-0.167470) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634795 / 0.215209 (0.419586) | 6.204760 / 2.077655 (4.127105) | 2.692479 / 1.504120 (1.188359) | 2.324260 / 1.541195 (0.783066) | 2.380640 / 1.468490 (0.912149) | 0.887827 / 4.584777 (-3.696950) | 5.251648 / 3.745712 (1.505935) | 2.632767 / 5.269862 (-2.637095) | 1.745721 / 4.565676 (-2.819955) | 0.108364 / 0.424275 (-0.315911) | 0.013409 / 0.007607 (0.005802) | 0.783427 / 0.226044 (0.557383) | 7.765144 / 2.268929 (5.496216) | 3.340686 / 55.444624 (-52.103938) | 2.715340 / 6.876477 (-4.161137) | 2.768604 / 2.142072 (0.626531) | 1.119746 / 4.805227 (-3.685481) | 0.210804 / 6.500664 (-6.289860) | 0.072600 / 0.075469 (-0.002869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517334 / 1.841788 (-0.324454) | 17.046837 / 8.074308 (8.972529) | 19.371090 / 10.191392 (9.179698) | 0.194275 / 0.680424 (-0.486148) | 0.026712 / 0.534201 (-0.507488) | 0.462731 / 0.579283 (-0.116552) | 0.568958 / 0.434364 (0.134595) | 0.555707 / 0.540337 (0.015370) | 0.663654 / 1.386936 (-0.723283) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d20476b1d4c8e11e0ffafc1570cbf4bd19011cf \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006423 / 0.011353 (-0.004930) | 0.003882 / 0.011008 (-0.007126) | 0.082976 / 0.038508 (0.044468) | 0.071281 / 0.023109 (0.048171) | 0.311367 / 0.275898 (0.035469) | 0.348228 / 0.323480 (0.024748) | 0.005315 / 0.007986 (-0.002671) | 0.003326 / 0.004328 (-0.001003) | 0.064641 / 0.004250 (0.060391) | 0.056134 / 0.037052 (0.019081) | 0.314071 / 0.258489 (0.055582) | 0.360534 / 0.293841 (0.066693) | 0.030642 / 0.128546 (-0.097904) | 0.008301 / 0.075646 (-0.067345) | 0.285820 / 0.419271 (-0.133451) | 0.069241 / 0.043533 (0.025708) | 0.313995 / 0.255139 (0.058856) | 0.336656 / 0.283200 (0.053457) | 0.031686 / 0.141683 (-0.109997) | 1.467627 / 1.452155 (0.015472) | 1.536493 / 1.492716 (0.043777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196518 / 0.018006 (0.178512) | 0.458235 / 0.000490 (0.457745) | 0.005599 / 0.000200 (0.005399) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027371 / 0.037411 (-0.010040) | 0.080986 / 0.014526 (0.066460) | 0.093296 / 0.176557 (-0.083260) | 0.150592 / 0.737135 (-0.586543) | 0.094150 / 0.296338 (-0.202188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379412 / 0.215209 (0.164202) | 3.797927 / 2.077655 (1.720272) | 1.830654 / 1.504120 (0.326534) | 1.669569 / 1.541195 (0.128374) | 1.746738 / 1.468490 
(0.278248) | 0.479536 / 4.584777 (-4.105241) | 3.592867 / 3.745712 (-0.152845) | 5.468098 / 5.269862 (0.198237) | 3.268013 / 4.565676 (-1.297663) | 0.056635 / 0.424275 (-0.367640) | 0.007224 / 0.007607 (-0.000383) | 0.456681 / 0.226044 (0.230636) | 4.566736 / 2.268929 (2.297807) | 2.362831 / 55.444624 (-53.081793) | 1.965141 / 6.876477 (-4.911336) | 2.156905 / 2.142072 (0.014833) | 0.572543 / 4.805227 (-4.232684) | 0.132203 / 6.500664 (-6.368461) | 0.059254 / 0.075469 (-0.016215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256134 / 1.841788 (-0.585654) | 19.905438 / 8.074308 (11.831130) | 14.179556 / 10.191392 (3.988164) | 0.168043 / 0.680424 (-0.512381) | 0.018215 / 0.534201 (-0.515986) | 0.392740 / 0.579283 (-0.186543) | 0.398397 / 0.434364 (-0.035967) | 0.463806 / 0.540337 (-0.076531) | 0.616248 / 1.386936 (-0.770688) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006564 / 0.011353 (-0.004789) | 0.003923 / 0.011008 (-0.007085) | 0.063929 / 0.038508 (0.025421) | 0.073780 / 0.023109 (0.050671) | 0.360242 / 0.275898 (0.084344) | 0.395078 / 0.323480 (0.071598) | 0.005265 / 0.007986 (-0.002720) | 0.003229 / 0.004328 (-0.001100) | 0.064094 / 0.004250 (0.059843) | 0.057468 / 0.037052 (0.020416) | 0.369530 / 0.258489 (0.111041) | 0.411159 / 0.293841 (0.117318) | 0.031278 / 0.128546 (-0.097268) | 0.008424 / 0.075646 (-0.067222) | 0.070411 / 0.419271 (-0.348860) | 0.048714 / 0.043533 (0.005181) | 0.361280 / 0.255139 (0.106141) | 0.382468 / 0.283200 (0.099269) | 0.023059 / 0.141683 (-0.118624) | 1.452369 / 1.452155 (0.000215) | 1.519192 / 1.492716 (0.026475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223745 / 0.018006 (0.205739) | 0.442086 / 0.000490 (0.441596) | 0.000379 / 0.000200 (0.000179) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030919 / 0.037411 (-0.006493) | 0.088483 / 0.014526 (0.073958) | 0.101165 / 0.176557 (-0.075391) | 0.154332 / 0.737135 (-0.582804) | 0.103030 / 0.296338 (-0.193309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414520 / 0.215209 (0.199311) | 4.126754 / 2.077655 (2.049099) | 2.142677 / 1.504120 (0.638557) | 1.995300 / 1.541195 (0.454106) | 2.101678 / 1.468490 (0.633188) | 0.481099 / 4.584777 (-4.103678) | 3.562813 / 3.745712 (-0.182900) | 3.392463 / 5.269862 (-1.877399) | 1.983943 / 4.565676 (-2.581734) | 0.056594 / 0.424275 (-0.367681) | 0.007216 / 0.007607 (-0.000391) | 0.495085 / 0.226044 (0.269041) | 4.955640 / 2.268929 (2.686712) | 2.629434 / 55.444624 (-52.815191) | 2.269577 / 6.876477 (-4.606900) | 2.357708 / 2.142072 (0.215635) | 0.612370 / 4.805227 (-4.192857) | 0.131169 / 6.500664 (-6.369495) | 0.061029 / 0.075469 (-0.014440) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.339438 / 1.841788 (-0.502350) | 19.757611 / 8.074308 (11.683303) | 14.246254 / 10.191392 (4.054862) | 0.170750 / 0.680424 (-0.509674) | 0.018192 / 0.534201 (-0.516009) | 0.395693 / 0.579283 (-0.183590) | 0.411003 / 0.434364 (-0.023361) | 0.478531 / 0.540337 (-0.061806) | 0.650291 / 1.386936 (-0.736645) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3e34d06d746688dd5d26e4c85517b7e1a2f361ca \"CML watermark\")\n"
] | "2023-07-03T18:29:14Z" | "2023-07-06T17:04:11Z" | "2023-07-06T16:55:25Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6004",
"merged_at": "2023-07-06T16:55:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6004"
} | Contains the following improvements:
* fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section
* updates `Makefile` to also run the style checks on `utils` and `setup.py`
* deletes a test for GH-hosted datasets (no longer supported)
* deletes `convert_dataset.sh` (outdated)
* aligns `utils/release.py` with `transformers` (the current version is outdated) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6004/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4127/comments | https://api.github.com/repos/huggingface/datasets/issues/4127/events | https://github.com/huggingface/datasets/pull/4127 | 1,197,297,756 | PR_kwDODunzps4132EN | 4,127 | Add configs with processed data in medical_dialog dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-04-08T13:08:16Z" | "2022-05-06T08:39:50Z" | "2022-04-08T16:20:51Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4127",
"merged_at": "2022-04-08T16:20:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4127"
} | There exist processed data files that do not require parsing the raw data files (which can take a long time).
Fix #4122. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4127/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5630/comments | https://api.github.com/repos/huggingface/datasets/issues/5630/events | https://github.com/huggingface/datasets/pull/5630 | 1,620,327,510 | PR_kwDODunzps5L1ahF | 5,630 | adds early exit if url is `PathLike` | {
"avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4",
"events_url": "https://api.github.com/users/vvvm23/events{/privacy}",
"followers_url": "https://api.github.com/users/vvvm23/followers",
"following_url": "https://api.github.com/users/vvvm23/following{/other_user}",
"gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vvvm23",
"id": 44398246,
"login": "vvvm23",
"node_id": "MDQ6VXNlcjQ0Mzk4MjQ2",
"organizations_url": "https://api.github.com/users/vvvm23/orgs",
"received_events_url": "https://api.github.com/users/vvvm23/received_events",
"repos_url": "https://api.github.com/users/vvvm23/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vvvm23"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5630). All of your documentation changes will be reflected on that endpoint."
] | "2023-03-12T11:23:28Z" | "2023-03-15T11:58:38Z" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5630.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5630",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5630.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5630"
} | Closes #4864
Should fix errors thrown when attempting to load a `json` dataset using a `pathlib.Path` in the `data_files` argument. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5630/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2435/comments | https://api.github.com/repos/huggingface/datasets/issues/2435/events | https://github.com/huggingface/datasets/pull/2435 | 907,505,531 | MDExOlB1bGxSZXF1ZXN0NjU4MzQzNDE2 | 2,435 | Insert Extractive QA templates for SQuAD-like datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [
"hi @lhoestq @SBrandeis i've now added the missing YAML tags, so this PR should be good to go :)",
"urgh, the windows tests are failing because of encoding issues 😢 \r\n\r\n```\r\ndataset_name = 'squad_kor_v1'\r\n\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\n def test_changed_dataset_card(dataset_name):\r\n card_path = repo_path / \"datasets\" / dataset_name / \"README.md\"\r\n assert card_path.exists()\r\n error_messages = []\r\n try:\r\n ReadMe.from_readme(card_path)\r\n except Exception as readme_error:\r\n error_messages.append(f\"The following issues have been found in the dataset cards:\\nREADME:\\n{readme_error}\")\r\n try:\r\n DatasetMetadata.from_readme(card_path)\r\n except Exception as metadata_error:\r\n error_messages.append(\r\n f\"The following issues have been found in the dataset cards:\\nYAML tags:\\n{metadata_error}\"\r\n )\r\n \r\n if error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>\r\n```",
"Seems like the encoding issues on windows is also being tackled in #2418 - will see if this solves the problem in the current PR"
] | "2021-05-31T14:09:11Z" | "2021-06-03T14:34:30Z" | "2021-06-03T14:32:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2435.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2435",
"merged_at": "2021-06-03T14:32:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2435.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2435"
} | This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2435/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2435/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4421/comments | https://api.github.com/repos/huggingface/datasets/issues/4421/events | https://github.com/huggingface/datasets/pull/4421 | 1,253,059,467 | PR_kwDODunzps44szxR | 4,421 | Add extractor for bzip2-compressed files | {
"avatar_url": "https://avatars.githubusercontent.com/u/2910707?v=4",
"events_url": "https://api.github.com/users/asivokon/events{/privacy}",
"followers_url": "https://api.github.com/users/asivokon/followers",
"following_url": "https://api.github.com/users/asivokon/following{/other_user}",
"gists_url": "https://api.github.com/users/asivokon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/asivokon",
"id": 2910707,
"login": "asivokon",
"node_id": "MDQ6VXNlcjI5MTA3MDc=",
"organizations_url": "https://api.github.com/users/asivokon/orgs",
"received_events_url": "https://api.github.com/users/asivokon/received_events",
"repos_url": "https://api.github.com/users/asivokon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/asivokon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asivokon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/asivokon"
} | [] | closed | false | null | [] | null | [] | "2022-05-30T19:19:40Z" | "2022-06-06T15:22:50Z" | "2022-06-06T15:22:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4421",
"merged_at": "2022-06-06T15:22:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4421"
} | This change enables loading bzipped datasets, just like any other compressed dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4421/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4421/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3073/comments | https://api.github.com/repos/huggingface/datasets/issues/3073/events | https://github.com/huggingface/datasets/issues/3073 | 1,025,718,469 | I_kwDODunzps49IzjF | 3,073 | Import error installing with ppc64le | {
"avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4",
"events_url": "https://api.github.com/users/gcervantes8/events{/privacy}",
"followers_url": "https://api.github.com/users/gcervantes8/followers",
"following_url": "https://api.github.com/users/gcervantes8/following{/other_user}",
"gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gcervantes8",
"id": 21228908,
"login": "gcervantes8",
"node_id": "MDQ6VXNlcjIxMjI4OTA4",
"organizations_url": "https://api.github.com/users/gcervantes8/orgs",
"received_events_url": "https://api.github.com/users/gcervantes8/received_events",
"repos_url": "https://api.github.com/users/gcervantes8/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gcervantes8"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.\r\n"
] | "2021-10-13T21:37:23Z" | "2021-10-14T16:35:46Z" | "2021-10-14T16:33:28Z" | NONE | null | null | null | ## Describe the bug
Installing the datasets library on a machine running ppc64le seems to cause an issue when importing the library.
```
python
Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Illegal instruction (core dumped)
```
Error when importing
`Illegal instruction (core dumped)`
## Steps to reproduce the bug
I get this error after installing the library with conda. I believe I can't install with pip because pyarrow only provides the ppc64le build on conda-forge.
```
conda create --name transformers_py36_v2 python=3.6
conda activate transformers_py36_v2
conda install datasets
```
## Installation logs
conda create --name transformers_py36_v2 python=3.6
```
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.10.3
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2
added / updated specs:
- python=3.6
The following NEW packages will be INSTALLED:
_libgcc_mutex conda-forge/linux-ppc64le::_libgcc_mutex-0.1-conda_forge
_openmp_mutex conda-forge/linux-ppc64le::_openmp_mutex-4.5-1_gnu
ca-certificates conda-forge/linux-ppc64le::ca-certificates-2021.10.8-h1084571_0
certifi pkgs/main/linux-ppc64le::certifi-2020.12.5-py36h6ffa863_0
ld_impl_linux-ppc~ conda-forge/linux-ppc64le::ld_impl_linux-ppc64le-2.36.1-ha35d02b_2
libffi conda-forge/linux-ppc64le::libffi-3.4.2-h3b9df90_4
libgcc-ng conda-forge/linux-ppc64le::libgcc-ng-11.2.0-h7698a5e_11
libgomp conda-forge/linux-ppc64le::libgomp-11.2.0-h7698a5e_11
libstdcxx-ng conda-forge/linux-ppc64le::libstdcxx-ng-11.2.0-habdf983_11
libzlib conda-forge/linux-ppc64le::libzlib-1.2.11-h339bb43_1013
ncurses conda-forge/linux-ppc64le::ncurses-6.2-hea85c5d_4
openssl conda-forge/linux-ppc64le::openssl-1.1.1l-h4e0d66e_0
pip conda-forge/noarch::pip-21.3-pyhd8ed1ab_0
python conda-forge/linux-ppc64le::python-3.6.13-h57873ef_2_cpython
readline conda-forge/linux-ppc64le::readline-8.1-h5c45dff_0
setuptools pkgs/main/linux-ppc64le::setuptools-58.0.4-py36h6ffa863_0
sqlite conda-forge/linux-ppc64le::sqlite-3.36.0-h4e2196e_2
tk conda-forge/linux-ppc64le::tk-8.6.11-h41c6715_1
wheel conda-forge/noarch::wheel-0.37.0-pyhd8ed1ab_1
xz conda-forge/linux-ppc64le::xz-5.2.5-h6eb9509_1
zlib conda-forge/linux-ppc64le::zlib-1.2.11-h339bb43_1013
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate transformers_py36_v2
#
# To deactivate an active environment, use
#
# $ conda deactivate
```
conda activate transformers_py36_v2
conda install datasets
```
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.10.3
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2
added / updated specs:
- datasets
The following NEW packages will be INSTALLED:
abseil-cpp conda-forge/linux-ppc64le::abseil-cpp-20210324.2-h3b9df90_0
aiohttp conda-forge/linux-ppc64le::aiohttp-3.7.4.post0-py36hc33305d_0
arrow-cpp conda-forge/linux-ppc64le::arrow-cpp-5.0.0-py36hf9cf308_8_cpu
async-timeout conda-forge/noarch::async-timeout-3.0.1-py_1000
attrs conda-forge/noarch::attrs-21.2.0-pyhd8ed1ab_0
aws-c-cal conda-forge/linux-ppc64le::aws-c-cal-0.5.11-hb3fac3d_0
aws-c-common conda-forge/linux-ppc64le::aws-c-common-0.6.2-h4e0d66e_0
aws-c-event-stream conda-forge/linux-ppc64le::aws-c-event-stream-0.2.7-h76da5f2_13
aws-c-io conda-forge/linux-ppc64le::aws-c-io-0.10.5-hf6a6c7c_0
aws-checksums conda-forge/linux-ppc64le::aws-checksums-0.1.11-hfe76d68_7
aws-sdk-cpp conda-forge/linux-ppc64le::aws-sdk-cpp-1.8.186-h90855e8_3
brotlipy conda-forge/linux-ppc64le::brotlipy-0.7.0-py36hc33305d_1001
bzip2 conda-forge/linux-ppc64le::bzip2-1.0.8-h4e0d66e_4
c-ares conda-forge/linux-ppc64le::c-ares-1.17.2-h4e0d66e_0
cffi conda-forge/linux-ppc64le::cffi-1.14.6-py36h021ab3c_1
chardet conda-forge/linux-ppc64le::chardet-4.0.0-py36h270354c_1
colorama conda-forge/noarch::colorama-0.4.4-pyh9f0ad1d_0
cryptography conda-forge/linux-ppc64le::cryptography-3.4.7-py36hc71b123_0
dataclasses conda-forge/noarch::dataclasses-0.8-pyh787bdff_2
datasets conda-forge/noarch::datasets-1.12.1-pyhd8ed1ab_1
dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0
filelock conda-forge/noarch::filelock-3.3.0-pyhd8ed1ab_0
fsspec conda-forge/noarch::fsspec-2021.10.0-pyhd8ed1ab_0
gflags conda-forge/linux-ppc64le::gflags-2.2.2-hb209c28_1004
glog conda-forge/linux-ppc64le::glog-0.5.0-h4040248_0
grpc-cpp conda-forge/linux-ppc64le::grpc-cpp-1.40.0-h2bf711c_2
huggingface_hub conda-forge/noarch::huggingface_hub-0.0.19-pyhd8ed1ab_0
idna conda-forge/noarch::idna-2.10-pyh9f0ad1d_0
idna_ssl conda-forge/noarch::idna_ssl-1.0.0-0
importlib-metadata conda-forge/linux-ppc64le::importlib-metadata-4.8.1-py36h270354c_0
importlib_metadata conda-forge/noarch::importlib_metadata-4.8.1-hd8ed1ab_0
krb5 conda-forge/linux-ppc64le::krb5-1.19.2-haf43566_2
libblas conda-forge/linux-ppc64le::libblas-3.9.0-11_linuxppc64le_openblas
libbrotlicommon conda-forge/linux-ppc64le::libbrotlicommon-1.0.9-h4e0d66e_5
libbrotlidec conda-forge/linux-ppc64le::libbrotlidec-1.0.9-h4e0d66e_5
libbrotlienc conda-forge/linux-ppc64le::libbrotlienc-1.0.9-h4e0d66e_5
libcblas conda-forge/linux-ppc64le::libcblas-3.9.0-11_linuxppc64le_openblas
libcurl conda-forge/linux-ppc64le::libcurl-7.79.1-he415e40_1
libedit conda-forge/linux-ppc64le::libedit-3.1.20191231-h41a240f_2
libev conda-forge/linux-ppc64le::libev-4.33-h6eb9509_1
libevent conda-forge/linux-ppc64le::libevent-2.1.10-h97db324_4
libgfortran-ng conda-forge/linux-ppc64le::libgfortran-ng-11.2.0-hfdc3801_11
libgfortran5 conda-forge/linux-ppc64le::libgfortran5-11.2.0-he58fbb4_11
liblapack conda-forge/linux-ppc64le::liblapack-3.9.0-11_linuxppc64le_openblas
libnghttp2 conda-forge/linux-ppc64le::libnghttp2-1.43.0-h42039ad_1
libopenblas conda-forge/linux-ppc64le::libopenblas-0.3.17-pthreads_h486567c_1
libprotobuf conda-forge/linux-ppc64le::libprotobuf-3.18.1-h690f14c_0
libssh2 conda-forge/linux-ppc64le::libssh2-1.10.0-ha5a9321_2
libthrift conda-forge/linux-ppc64le::libthrift-0.15.0-h54f692e_1
libutf8proc conda-forge/linux-ppc64le::libutf8proc-2.6.1-h4e0d66e_0
lz4-c conda-forge/linux-ppc64le::lz4-c-1.9.3-h3b9df90_1
multidict conda-forge/linux-ppc64le::multidict-5.2.0-py36hc33305d_0
multiprocess conda-forge/linux-ppc64le::multiprocess-0.70.12.2-py36hc33305d_0
numpy conda-forge/linux-ppc64le::numpy-1.19.5-py36h86665d4_1
orc conda-forge/linux-ppc64le::orc-1.7.0-hae6b4bd_0
packaging conda-forge/noarch::packaging-21.0-pyhd8ed1ab_0
pandas conda-forge/linux-ppc64le::pandas-1.1.5-py36hab1a6e6_0
parquet-cpp conda-forge/noarch::parquet-cpp-1.5.1-2
pyarrow conda-forge/linux-ppc64le::pyarrow-5.0.0-py36h7a46c7e_8_cpu
pycparser conda-forge/noarch::pycparser-2.20-pyh9f0ad1d_2
pyopenssl conda-forge/noarch::pyopenssl-21.0.0-pyhd8ed1ab_0
pyparsing conda-forge/noarch::pyparsing-2.4.7-pyh9f0ad1d_0
pysocks conda-forge/linux-ppc64le::pysocks-1.7.1-py36h270354c_3
python-dateutil conda-forge/noarch::python-dateutil-2.8.2-pyhd8ed1ab_0
python-xxhash conda-forge/linux-ppc64le::python-xxhash-2.0.2-py36hc33305d_0
python_abi conda-forge/linux-ppc64le::python_abi-3.6-2_cp36m
pytz conda-forge/noarch::pytz-2021.3-pyhd8ed1ab_0
pyyaml conda-forge/linux-ppc64le::pyyaml-5.4.1-py36hc33305d_1
re2 conda-forge/linux-ppc64le::re2-2021.09.01-h3b9df90_0
requests conda-forge/noarch::requests-2.25.1-pyhd3deb0d_0
s2n conda-forge/linux-ppc64le::s2n-1.0.10-h97db324_0
six conda-forge/noarch::six-1.16.0-pyh6c4a22f_0
snappy conda-forge/linux-ppc64le::snappy-1.1.8-hb209c28_3
tqdm conda-forge/noarch::tqdm-4.62.3-pyhd8ed1ab_0
typing-extensions conda-forge/noarch::typing-extensions-3.10.0.2-hd8ed1ab_0
typing_extensions conda-forge/noarch::typing_extensions-3.10.0.2-pyha770c72_0
urllib3 conda-forge/noarch::urllib3-1.26.7-pyhd8ed1ab_0
xxhash conda-forge/linux-ppc64le::xxhash-0.8.0-h4e0d66e_3
yaml conda-forge/linux-ppc64le::yaml-0.2.5-h6eb9509_0
yarl conda-forge/linux-ppc64le::yarl-1.6.3-py36hc33305d_2
zipp conda-forge/noarch::zipp-3.6.0-pyhd8ed1ab_0
zstd conda-forge/linux-ppc64le::zstd-1.5.0-h65c4b1a_0
The following packages will be UPDATED:
certifi pkgs/main::certifi-2020.12.5-py36h6ff~ --> conda-forge::certifi-2021.5.30-py36h270354c_0
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Red Hat Enterprise Linux 8.2 (Ootpa)
- Python version: 3.6
- PyArrow version: pyarrow - 5.0.0 - py36h7a46c7e_8_cpu - conda-forge
Any help would be appreciated! I've been struggling to install datasets on this machine.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3073/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3073/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5211/comments | https://api.github.com/repos/huggingface/datasets/issues/5211/events | https://github.com/huggingface/datasets/pull/5211 | 1,438,544,617 | PR_kwDODunzps5CVgBx | 5,211 | Update Overview.ipynb google colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"WDYT @albertvillanova ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5211). All of your documentation changes will be reflected on that endpoint."
] | "2022-11-07T15:23:52Z" | "2022-11-29T15:59:48Z" | "2022-11-29T15:54:17Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5211.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5211",
"merged_at": "2022-11-29T15:54:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5211.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5211"
} | - removed metrics stuff
- added image example
- added audio example (with ffmpeg instructions)
- updated the "add a new dataset" section | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5211/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1201/comments | https://api.github.com/repos/huggingface/datasets/issues/1201/events | https://github.com/huggingface/datasets/pull/1201 | 757,927,941 | MDExOlB1bGxSZXF1ZXN0NTMzMTk3OTI2 | 1,201 | adding medical-questions-pairs | {
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tuner007",
"id": 46425391,
"login": "tuner007",
"node_id": "MDQ6VXNlcjQ2NDI1Mzkx",
"organizations_url": "https://api.github.com/users/tuner007/orgs",
"received_events_url": "https://api.github.com/users/tuner007/received_events",
"repos_url": "https://api.github.com/users/tuner007/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuner007/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tuner007"
} | [] | closed | false | null | [] | null | [] | "2020-12-06T13:36:52Z" | "2020-12-06T13:39:44Z" | "2020-12-06T13:39:32Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1201",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1201"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1201/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/5202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5202/comments | https://api.github.com/repos/huggingface/datasets/issues/5202/events | https://github.com/huggingface/datasets/issues/5202 | 1,435,886,090 | I_kwDODunzps5VleIK | 5,202 | CI fails after bulk edit of canonical datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Fixed by: https://huggingface.co/datasets/paws/discussions/1"
] | "2022-11-04T10:51:20Z" | "2023-02-16T09:11:10Z" | "2023-02-16T09:11:10Z" | MEMBER | null | null | null | ```
______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws', config_name = 'labeled_final'
expected_splits = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, config_name, expected_splits",
[
("squad", "plain_text", ["train", "validation"]),
("dalle-mini/wit", "dalle-mini--wit", ["train"]),
("paws", "labeled_final", ["train", "test", "validation"]),
],
)
def test_get_dataset_config_info(path, config_name, expected_splits):
info = get_dataset_config_info(path, config_name=config_name)
assert info.config_name == config_name
> assert list(info.splits.keys()) == expected_splits
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
tests/test_inspect.py:45: AssertionError
_ test_get_dataset_info[paws-expected_configs2-expected_splits_in_first_config2] _
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws'
expected_configs = ['labeled_final', 'labeled_swap', 'unlabeled_final']
expected_splits_in_first_config = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, expected_configs, expected_splits_in_first_config",
[
("squad", ["plain_text"], ["train", "validation"]),
("dalle-mini/wit", ["dalle-mini--wit"], ["train"]),
("paws", ["labeled_final", "labeled_swap", "unlabeled_final"], ["train", "test", "validation"]),
],
)
def test_get_dataset_info(path, expected_configs, expected_splits_in_first_config):
infos = get_dataset_infos(path)
assert list(infos.keys()) == expected_configs
expected_config = expected_configs[0]
assert expected_config in infos
info = infos[expected_config]
assert info.config_name == expected_config
> assert list(info.splits.keys()) == expected_splits_in_first_config
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
tests/test_inspect.py:90: AssertionError
______ test_get_dataset_split_names[paws-labeled_final-expected_splits2] _______
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws', expected_config = 'labeled_final'
expected_splits = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, expected_config, expected_splits",
[
("squad", "plain_text", ["train", "validation"]),
("dalle-mini/wit", "dalle-mini--wit", ["train"]),
("paws", "labeled_final", ["train", "test", "validation"]),
],
)
def test_get_dataset_split_names(path, expected_config, expected_splits):
infos = get_dataset_infos(path)
assert expected_config in infos
info = infos[expected_config]
assert info.config_name == expected_config
> assert list(info.splits.keys()) == expected_splits
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5202/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4133/comments | https://api.github.com/repos/huggingface/datasets/issues/4133/events | https://github.com/huggingface/datasets/issues/4133 | 1,197,830,623 | I_kwDODunzps5HZXHf | 4,133 | HANS dataset preview broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pietrolesci",
"id": 61748653,
"login": "pietrolesci",
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pietrolesci"
} | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | [] | null | [
"The dataset cannot be loaded, be it in normal or streaming mode.\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1595, in __next__\r\n out = self.readline()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1592, in readline\r\n return self.readuntil(b\"\\n\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1581, in readuntil\r\n self.seek(start + found + len(char))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 676, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=False)\r\nDownloading and preparing dataset hans/plain_text (download: 29.51 MiB, generated: 30.34 MiB, post-processed: Unknown size, total: 59.85 MiB) to /home/slesage/.cache/huggingface/datasets/hans/plain_text/1.0.0/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1087, in _prepare_split\r\n for key, record in logging.tqdm(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in 
_generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\nValueError: readline of closed file\r\n```\r\n\r\n",
"Hi! I've opened a PR that should make this dataset stremable. You can test it as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"hans\", split=\"train\", streaming=True, revision=\"49decd29839c792ecc24ac88f861cbdec30c1c40\")\r\n```\r\n\r\n@severo The current script doesn't throw an error in normal mode (only in streaming mode) on my local machine or in Colab. Can you update your installation of `datasets` and see if that fixes the issue?",
"Thanks for this. It works well, thanks! The dataset viewer is using https://github.com/huggingface/datasets/releases/tag/2.0.0, I'm eager to upgrade to 2.0.1 😉"
] | "2022-04-08T21:06:15Z" | "2022-04-13T11:57:34Z" | "2022-04-13T11:57:34Z" | NONE | null | null | null | ## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4133/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6291/comments | https://api.github.com/repos/huggingface/datasets/issues/6291/events | https://github.com/huggingface/datasets/issues/6291 | 1,936,129,871 | I_kwDODunzps5zZv9P | 6,291 | Casting type from Array2D int to Array2D float crashes | {
"avatar_url": "https://avatars.githubusercontent.com/u/22567306?v=4",
"events_url": "https://api.github.com/users/AlanBlanchet/events{/privacy}",
"followers_url": "https://api.github.com/users/AlanBlanchet/followers",
"following_url": "https://api.github.com/users/AlanBlanchet/following{/other_user}",
"gists_url": "https://api.github.com/users/AlanBlanchet/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AlanBlanchet",
"id": 22567306,
"login": "AlanBlanchet",
"node_id": "MDQ6VXNlcjIyNTY3MzA2",
"organizations_url": "https://api.github.com/users/AlanBlanchet/orgs",
"received_events_url": "https://api.github.com/users/AlanBlanchet/received_events",
"repos_url": "https://api.github.com/users/AlanBlanchet/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AlanBlanchet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlanBlanchet/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AlanBlanchet"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix"
] | "2023-10-10T20:10:10Z" | "2023-10-13T13:45:31Z" | "2023-10-13T13:45:31Z" | NONE | null | null | null | ### Describe the bug
I am on a school project and the initial type for the feature annotations is `Array2D(shape=(None, 4))`. I am trying to cast this type to `float64` and pyarrow gives me this error:
```
Traceback (most recent call last):
File "/home/alan/dev/ClassezDesImagesAvecDesAlgorithmesDeDeeplearning/src/sdd/data/dataset.py", line 141, in <module>
dataset = StanfordDogsDataset(size, 5).original(True).demo()
File "<attrs generated init __main__.StanfordDogsDataset>", line 4, in __init__
File "/home/alan/dev/ClassezDesImagesAvecDesAlgorithmesDeDeeplearning/src/sdd/data/dataset.py", line 33, in __attrs_post_init__
self.dataset = self.dataset.cast_column(
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2110, in cast_column
return self.cast(features)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2055, in cast
dataset = dataset.map(
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3474, in _map_single
batch = apply_function_on_filtered_inputs(
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3353, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 2328, in table_cast
return cast_table_to_schema(table, schema)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 2287, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 2287, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 2143, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1967, in array_cast
return pa_type.wrap_array(array)
File "pyarrow/types.pxi", line 1369, in pyarrow.lib.BaseExtensionType.wrap_array
TypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: double>>, got list<item: list<item: int32>>
```
### Steps to reproduce the bug
```python
dataset = datasets.load_dataset("Alanox/stanford-dogs", split="full")
dataset = dataset.cast_column("annotations", Array2D((None, 4), "float64"))
```
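(Editorial note, not part of the original report.) Until the fix lands, a hedged workaround sketch: instead of `cast_column`, rebuild the column through `map` with the target feature declared explicitly, converting the values to Python floats so no int-to-float storage cast is needed. Using `map(..., features=...)` for this purpose is an assumption, not a confirmed fix.
```python
from datasets import Array2D, Features, load_dataset

dataset = load_dataset("Alanox/stanford-dogs", split="full")

# Assumed workaround: produce float values in Python and declare the target
# feature type explicitly, so no int32 -> float64 extension-storage cast occurs.
target_features = Features({**dataset.features, "annotations": Array2D((None, 4), "float64")})
dataset = dataset.map(
    lambda ex: {"annotations": [[float(v) for v in row] for row in ex["annotations"]]},
    features=target_features,
)
```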
### Expected behavior
It should simply cast the column feature type to a `float64` without error
### Environment info
datasets == 2.14.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6291/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6165/comments | https://api.github.com/repos/huggingface/datasets/issues/6165/events | https://github.com/huggingface/datasets/pull/6165 | 1,861,124,284 | PR_kwDODunzps5YexBL | 6,165 | Fix multiprocessing with spawn in iterable datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hubert-Bonisseur",
"id": 48770768,
"login": "Hubert-Bonisseur",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hubert-Bonisseur"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq \r\nA test is failing, but I don't think it is due to my changes",
"Good catch ! Could you add a test to make sure transformed IterableDataset objects are still picklable ?\r\n\r\nSomething like `test_pickle_after_many_transforms` in in `test_iterable_dataset.py` that does a bunch or rename, map, take on a dataset and checks that the dataset can be pickled at the end and the reloaded dataset returns the same elements",
"@lhoestq \r\nI added the test and fixed one last method",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006537 / 0.011353 (-0.004816) | 0.003960 / 0.011008 (-0.007048) | 0.085135 / 0.038508 (0.046627) | 0.079271 / 0.023109 (0.056162) | 0.383743 / 0.275898 (0.107845) | 0.414622 / 0.323480 (0.091143) | 0.004202 / 0.007986 (-0.003784) | 0.003537 / 0.004328 (-0.000791) | 0.065758 / 0.004250 (0.061508) | 0.054225 / 0.037052 (0.017173) | 0.395715 / 0.258489 (0.137226) | 0.438985 / 0.293841 (0.145144) | 0.030590 / 0.128546 (-0.097956) | 0.008754 / 0.075646 (-0.066892) | 0.288415 / 0.419271 (-0.130857) | 0.051863 / 0.043533 (0.008330) | 0.382501 / 0.255139 (0.127363) | 0.414428 / 0.283200 (0.131228) | 0.024084 / 0.141683 (-0.117599) | 1.478726 / 1.452155 (0.026572) | 1.544763 / 1.492716 (0.052047) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285143 / 0.018006 (0.267136) | 0.603859 / 0.000490 (0.603369) | 0.004330 / 0.000200 (0.004131) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027856 / 0.037411 (-0.009555) | 0.081963 / 0.014526 (0.067437) | 0.104106 / 0.176557 (-0.072451) | 0.151378 / 0.737135 (-0.585757) | 0.096476 / 0.296338 (-0.199862) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402938 / 0.215209 (0.187729) | 4.042312 / 2.077655 (1.964657) | 2.068421 / 1.504120 (0.564301) | 1.877870 / 1.541195 (0.336675) | 1.947643 / 1.468490 
(0.479153) | 0.482031 / 4.584777 (-4.102746) | 3.554747 / 3.745712 (-0.190965) | 3.307811 / 5.269862 (-1.962050) | 2.082886 / 4.565676 (-2.482791) | 0.056853 / 0.424275 (-0.367422) | 0.007535 / 0.007607 (-0.000072) | 0.483694 / 0.226044 (0.257649) | 4.827906 / 2.268929 (2.558978) | 2.567572 / 55.444624 (-52.877052) | 2.167206 / 6.876477 (-4.709271) | 2.414442 / 2.142072 (0.272369) | 0.579472 / 4.805227 (-4.225755) | 0.132976 / 6.500664 (-6.367688) | 0.059315 / 0.075469 (-0.016154) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260086 / 1.841788 (-0.581702) | 19.438297 / 8.074308 (11.363989) | 14.188161 / 10.191392 (3.996769) | 0.168534 / 0.680424 (-0.511890) | 0.018070 / 0.534201 (-0.516131) | 0.394241 / 0.579283 (-0.185043) | 0.411057 / 0.434364 (-0.023307) | 0.461123 / 0.540337 (-0.079215) | 0.626844 / 1.386936 (-0.760092) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006896 / 0.011353 (-0.004457) | 0.004207 / 0.011008 (-0.006801) | 0.064981 / 0.038508 (0.026473) | 0.080261 / 0.023109 (0.057152) | 0.399403 / 0.275898 (0.123505) | 0.433099 / 0.323480 (0.109619) | 0.005697 / 0.007986 (-0.002288) | 0.003601 / 0.004328 (-0.000728) | 0.065924 / 0.004250 (0.061673) | 0.058868 / 0.037052 (0.021815) | 0.403705 / 0.258489 (0.145216) | 0.439218 / 0.293841 (0.145377) | 0.032789 / 0.128546 (-0.095757) | 0.008675 / 0.075646 (-0.066971) | 0.071217 / 0.419271 (-0.348055) | 0.048487 / 0.043533 (0.004954) | 0.399878 / 0.255139 (0.144739) | 0.412816 / 0.283200 (0.129616) | 0.023905 / 0.141683 (-0.117778) | 1.541402 / 1.452155 (0.089247) | 1.588080 / 1.492716 (0.095364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322863 / 0.018006 (0.304856) | 0.530291 / 0.000490 (0.529802) | 0.004862 / 0.000200 (0.004662) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032697 / 0.037411 (-0.004715) | 0.092416 / 0.014526 (0.077891) | 0.107355 / 0.176557 (-0.069201) | 0.160217 / 0.737135 (-0.576918) | 0.109286 / 0.296338 (-0.187052) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437375 / 0.215209 (0.222166) | 4.362644 / 2.077655 (2.284990) | 2.335404 / 1.504120 (0.831284) | 2.173215 / 1.541195 (0.632020) | 2.254061 / 1.468490 (0.785571) | 0.493906 / 4.584777 (-4.090871) | 3.609025 / 3.745712 (-0.136687) | 3.352380 / 5.269862 (-1.917481) | 2.074185 / 4.565676 (-2.491492) | 0.057863 / 0.424275 (-0.366412) | 0.007297 / 0.007607 (-0.000310) | 0.512464 / 0.226044 (0.286420) | 5.135921 / 2.268929 (2.866993) | 2.788889 / 55.444624 (-52.655736) | 2.479097 / 6.876477 (-4.397379) | 2.717848 / 2.142072 (0.575776) | 0.590442 / 4.805227 (-4.214785) | 0.133721 / 6.500664 (-6.366943) | 0.061491 / 0.075469 (-0.013978) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429564 / 1.841788 (-0.412224) | 20.628733 / 8.074308 (12.554425) | 15.299571 / 10.191392 (5.108179) | 0.171032 / 0.680424 (-0.509392) | 0.019995 / 0.534201 (-0.514206) | 0.401283 / 0.579283 (-0.178000) | 0.416504 / 0.434364 (-0.017860) | 0.471219 / 0.540337 (-0.069118) | 0.641299 / 1.386936 (-0.745637) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5503e7beb5a31926aec03c6c9b24813f9f83cd7b \"CML watermark\")\n"
] | "2023-08-22T10:07:23Z" | "2023-08-29T13:27:14Z" | "2023-08-29T13:18:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6165.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6165",
"merged_at": "2023-08-29T13:18:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6165.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6165"
} | The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems.
This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers.
I fixed the issue by replacing lambda and local methods which are not pickle-able.
See the example below:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
if __name__ == "__main__":
dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.to_iterable_dataset(num_shards=3)
dataset = dataset.remove_columns(["package_name"])
dataset = dataset.rename_columns({
"review": "review1"
})
dataset = dataset.rename_column("date", "date1")
for sample in DataLoader(dataset, batch_size=None, num_workers=3):
print(sample)
```
To observe the issue (and the fix) on a Linux system, adding these lines should do the trick:
```python
import multiprocessing
multiprocessing.set_start_method('spawn')
```
I also removed what looks like code duplication between `rename_columns` and `rename_column`.
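For illustration only (not code from this PR; the class name is hypothetical), a minimal sketch of the picklability point above: a module-level callable object can be pickled under the "spawn" start method, whereas a lambda or a function defined inside another function cannot.
```python
from dataclasses import dataclass

@dataclass
class RenameExampleFn:
    # mapping of old column names to new ones
    mapping: dict

    def __call__(self, example: dict) -> dict:
        # Rename the keys of a single example. Picklable because the class is
        # defined at module level and only stores plain data.
        return {self.mapping.get(key, key): value for key, value in example.items()}

rename_fn = RenameExampleFn({"review": "review1", "date": "date1"})
```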
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6165/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2656/comments | https://api.github.com/repos/huggingface/datasets/issues/2656/events | https://github.com/huggingface/datasets/pull/2656 | 945,421,790 | MDExOlB1bGxSZXF1ZXN0NjkwNzUzNjA3 | 2,656 | Change `from_csv` default arguments | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | [
"This is not the default in pandas right ?\r\nWe try to align our CSV loader with the pandas API.\r\n\r\nMoreover according to their documentation, the python parser is used when sep is None, which might not be the fastest one.\r\n\r\nMaybe users could just specify `sep=None` themselves ?\r\nIn this case we should add some documentation about this"
] | "2021-07-15T14:09:06Z" | "2023-09-24T09:56:44Z" | "2021-07-16T10:23:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2656.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2656",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2656.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2656"
} | Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator
This PR allows users to use this pandas feature by passing `sep=None` to `Dataset.from_csv`:
```python
Dataset.from_csv(
...,
sep=None
)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2656/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3549/comments | https://api.github.com/repos/huggingface/datasets/issues/3549/events | https://github.com/huggingface/datasets/pull/3549 | 1,096,426,996 | PR_kwDODunzps4wqkGt | 3,549 | Fix sem_eval_2018_task_1 download location | {
"avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4",
"events_url": "https://api.github.com/users/maxpel/events{/privacy}",
"followers_url": "https://api.github.com/users/maxpel/followers",
"following_url": "https://api.github.com/users/maxpel/following{/other_user}",
"gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/maxpel",
"id": 31095360,
"login": "maxpel",
"node_id": "MDQ6VXNlcjMxMDk1MzYw",
"organizations_url": "https://api.github.com/users/maxpel/orgs",
"received_events_url": "https://api.github.com/users/maxpel/received_events",
"repos_url": "https://api.github.com/users/maxpel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxpel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/maxpel"
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for pushing this :)\r\n\r\nIt seems that you created this PR from an old version of `datasets` that didn't have the sem_eval_2018_task_1.py file.\r\n\r\nCan you try merging `master` into your branch ? Or re-create your PR from a branch that comes from a more recent version of `datasets` ?\r\n\r\nAnd sorry for the late response !",
"Hi! No problem! I made the new branch like you said and opened https://github.com/huggingface/datasets/pull/3643 for it. I will close this one."
] | "2022-01-07T15:37:52Z" | "2022-01-27T15:52:03Z" | "2022-01-27T15:52:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3549.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3549",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3549.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3549"
} | This changes the download location of sem_eval_2018_task_1 files to include the test set labels as discussed in https://github.com/huggingface/datasets/issues/2745#issuecomment-954588500_ with @lhoestq. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3549/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4068/comments | https://api.github.com/repos/huggingface/datasets/issues/4068/events | https://github.com/huggingface/datasets/pull/4068 | 1,186,765,422 | PR_kwDODunzps41VC0I | 4,068 | Improve out of bounds error message | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-30T17:22:10Z" | "2022-03-31T08:39:08Z" | "2022-03-31T08:33:57Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4068.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4068",
"merged_at": "2022-03-31T08:33:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4068.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4068"
} | In 1.18.4 with https://github.com/huggingface/datasets/pull/3719 we introduced an error message for users using `select` with out of bounds indices. The message ended up being confusing for some users because it mentioned negative indices, which is not the main use case.
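(Editorial illustration, not from the PR.) The situation in question is calling `select` with an index outside the dataset's range; the exact exception type and message depend on the `datasets` version:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})
ds.select([3, 5])   # fine: indices are within [0, 9]
ds.select([100])    # raises an out-of-bounds error, much like list(range(10))[100]
```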
I replaced it with a message that is very similar to the one you get when you try to access a list with an out-of-range index. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4068/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1247/comments | https://api.github.com/repos/huggingface/datasets/issues/1247/events | https://github.com/huggingface/datasets/pull/1247 | 758,431,640 | MDExOlB1bGxSZXF1ZXN0NTMzNjA1NzE2 | 1,247 | Adding indonlu dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/6518504?v=4",
"events_url": "https://api.github.com/users/yasirabd/events{/privacy}",
"followers_url": "https://api.github.com/users/yasirabd/followers",
"following_url": "https://api.github.com/users/yasirabd/following{/other_user}",
"gists_url": "https://api.github.com/users/yasirabd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yasirabd",
"id": 6518504,
"login": "yasirabd",
"node_id": "MDQ6VXNlcjY1MTg1MDQ=",
"organizations_url": "https://api.github.com/users/yasirabd/orgs",
"received_events_url": "https://api.github.com/users/yasirabd/received_events",
"repos_url": "https://api.github.com/users/yasirabd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yasirabd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yasirabd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yasirabd"
} | [] | closed | false | null | [] | null | [
"looks like this PR includes changes about many files other than the ones for IndoNLU\r\nCould you create another branch and another PR please ?",
"> looks like this PR includes changes about many files other than the ones for IndoNLU\r\n> Could you create another branch and another PR please ?\r\n\r\nOkay I'll make it"
] | "2020-12-07T11:38:45Z" | "2020-12-08T14:11:50Z" | "2020-12-08T14:11:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1247",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1247"
} | IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia. It contains 12 datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1247/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1388/comments | https://api.github.com/repos/huggingface/datasets/issues/1388/events | https://github.com/huggingface/datasets/pull/1388 | 760,373,136 | MDExOlB1bGxSZXF1ZXN0NTM1MjE1Nzk2 | 1,388 | hind_encorp | {
"avatar_url": "https://avatars.githubusercontent.com/u/56379013?v=4",
"events_url": "https://api.github.com/users/rahul-art/events{/privacy}",
"followers_url": "https://api.github.com/users/rahul-art/followers",
"following_url": "https://api.github.com/users/rahul-art/following{/other_user}",
"gists_url": "https://api.github.com/users/rahul-art/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rahul-art",
"id": 56379013,
"login": "rahul-art",
"node_id": "MDQ6VXNlcjU2Mzc5MDEz",
"organizations_url": "https://api.github.com/users/rahul-art/orgs",
"received_events_url": "https://api.github.com/users/rahul-art/received_events",
"repos_url": "https://api.github.com/users/rahul-art/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rahul-art/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahul-art/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rahul-art"
} | [] | closed | false | null | [] | null | [] | "2020-12-09T14:22:59Z" | "2020-12-09T14:46:51Z" | "2020-12-09T14:46:37Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1388.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1388",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1388.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1388"
} | resubmit of hind_encorp file changes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1388/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2206/comments | https://api.github.com/repos/huggingface/datasets/issues/2206/events | https://github.com/huggingface/datasets/issues/2206 | 855,252,415 | MDU6SXNzdWU4NTUyNTI0MTU= | 2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | {
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yana-xuyan",
"id": 38536635,
"login": "yana-xuyan",
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yana-xuyan"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assumed range.\r\nCan you please provide a minimal reproducible example for more help?",
"Hi @yana-xuyan, thanks for reporting.\r\n\r\nAs clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. As your message error states, one of the values of this field is `50259`, and therefore it cannot be stored as an `int8`.\r\n\r\nMaybe we could implement a way to disable this optimization and allow using any integer value; although the size of the cache files would be much larger.",
"I'm facing same issue @mariosasko @albertvillanova \r\n\r\n```\r\nArrowInvalid: Integer value 50260 not in range: -128 to 127\r\n```\r\n\r\nTo reproduce:\r\n```python\r\nSPECIAL_TOKENS = ['<bos>','<eos>','<speaker1>','<speaker2>','<pad>']\r\nATTR_TO_SPECIAL_TOKEN = {\r\n 'bos_token': '<bos>', \r\n 'eos_token': '<eos>', \r\n 'pad_token': '<pad>',\r\n 'additional_special_tokens': ['<speaker1>', '<speaker2>']\r\n }\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\", use_fast=False)\r\nnum_added_tokens =tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN)\r\nvocab_size = len(self.tokenizer.encoder) + num_added_tokens\r\nvocab =tokenizer.get_vocab()\r\n\r\npad_index = tokenizer.pad_token_id\r\neos_index = tokenizer.eos_token_id\r\nbos_index = tokenizer.bos_token_id\r\nspeaker1_index = vocab[\"<speaker1>\"]\r\nspeaker2_index = vocab[\"<speaker2>\"]\r\n```\r\n\r\n```python\r\ntokenizer.decode(['50260'])\r\n'<speaker1>'\r\n```",
"@mariosasko \r\nI am hitting this bug in the Bert tokenizer too. I see that @albertvillanova labeled this as a bug back in April. Has there been a fix released yet?\r\nWhat I did for now is to just disable the optimization in the HF library. @yana-xuyan and @thomas-happify, is that what you did and did that work for you?\r\n\r\n",
"Hi @gregg-ADP, \r\n\r\nThis is still a bug.\r\n\r\nAs @albertvillanova has suggested, maybe it's indeed worth adding a variable to `config.py` to have a way to disable this behavior.\r\n\r\nIn the meantime, this forced optimization can be disabled by specifying `features` (of the returned examples) in the `map` call:\r\n```python\r\nfrom datasets import *\r\n... # dataset init\r\nds.map(process_example, features=Features({\"special_tokens_mask\": Sequence(Value(\"int32\")), ... rest of the features}) \r\n```\r\n\r\ncc @lhoestq so he is also aware of this issue",
"Thanks for the quick reply @mariosasko. What I did was to changed the optimizer to use int32 instead of int8. \r\nWhat you're suggesting specifies the type for each feature explicitly without changing the HF code. This is definitely a better option. However, we are hitting a new error later:\r\n```\r\n File \"/Users/ccccc/PycharmProjects/aaaa-ml/venv-source/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'pos'\r\n\r\n```\r\nWhere 'pos' is the name of a new feature we added. Do you agree that your way of fixing the optimizer issue will not fix our new issue? If not, I will continue with this optimizer fix until we resolve our other issue.\r\n",
"Hi @gwc4github,\r\n\r\nthe fix was merged a few minutes ago, and it doesn't require any changes on the user side (e.g. no need for specifying `features`). If you find time, feel free to install `datasets` from master with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand let us know if it works for your use case! "
] | "2021-04-11T08:40:09Z" | "2021-11-10T12:18:30Z" | "2021-11-10T12:04:28Z" | NONE | null | null | null | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
```
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single
writer.write(example)
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write
self.write_on_file()
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file
pa_array = pa.array(typed_sequence)
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__
out = out.cast(pa.list_(self.optimized_int_type))
File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127
```
Do you have any idea about it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2206/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1671/comments | https://api.github.com/repos/huggingface/datasets/issues/1671/events | https://github.com/huggingface/datasets/issues/1671 | 776,652,193 | MDU6SXNzdWU3NzY2NTIxOTM= | 1,671 | connection issue | {
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehkarimimahabadi",
"id": 73364383,
"login": "rabeehkarimimahabadi",
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehkarimimahabadi"
} | [] | closed | false | null | [] | null | [
"Also, mayjor issue for me is the format issue, even if I go through changing the whole code to use load_from_disk, then if I do \r\n\r\nd = datasets.load_from_disk(\"imdb\")\r\nd = d[\"train\"][:10] => the format of this is no more in datasets format\r\nthis is different from you call load_datasets(\"train[10]\")\r\n\r\ncould you tell me how I can make the two datastes the same format @lhoestq \r\n\r\n",
"> `\r\nrequests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7ff6d6c60a20>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))`\r\n\r\nDo you have an internet connection on the machine ? Is there a proxy that might block requests to aws ?\r\n\r\n> I tried to do read the data, save it to a path and then set HF_HOME, which does not work and this is still not reading from the old set path, could you assist me how to save the datasets in a path, and let dataset library read from this path to avoid connection issue. thanks\r\n\r\nHF_HOME is used to specify the directory for the cache files of this library.\r\nYou can use save_to_disk and load_from_disk without changing the HF_HOME:\r\n```python\r\nimdb = datasets.load_dataset(\"imdb\")\r\nimdb.save_to_disk(\"/idiap/temp/rkarimi/hf_datasets/imdb\")\r\nimdb = datasets.load_from_disk(\"/idiap/temp/rkarimi/hf_datasets/imdb\")\r\n```\r\n\r\n> could you tell me how I can make the two datastes the same format\r\n\r\nIndeed they returns different things:\r\n- `load_dataset` returns a `Dataset` object if the split is specified, or a `DatasetDict` if no split is given. Therefore `load_datasets(\"imdb\", split=\"train[10]\")` returns a `Dataset` object containing 10 elements.\r\n- doing `d[\"train\"][:10]` on a DatasetDict \"d\" gets the train split `d[\"train\"]` as a `Dataset` object and then gets the first 10 elements as a dictionary"
] | "2020-12-30T21:56:20Z" | "2022-10-05T12:42:12Z" | "2022-10-05T12:42:12Z" | NONE | null | null | null | Hi
I am getting this connection issue, which results in large-scale failures on the cloud. @lhoestq, I would appreciate your help on this.
If I want to keep the code the same, i.e. not use save_to_disk / load_from_disk, but instead save the datasets in the format load_dataset reads from and copy the files into the folder the datasets library reads from, could you assist me with how this can be done? Thanks.
I tried to read the data, save it to a path and then set HF_HOME, but this does not work: it is still not reading from the previously set path. Could you assist me with how to save the datasets to a path and let the datasets library read from that path, to avoid the connection issue? Thanks.
```
imdb = datasets.load_dataset("imdb")
imdb.save_to_disk("/idiap/temp/rkarimi/hf_datasets/imdb")
>>> os.environ["HF_HOME"]="/idiap/temp/rkarimi/hf_datasets/"
>>> imdb = datasets.load_dataset("imdb")
Reusing dataset imdb (/idiap/temp/rkarimi/cache_home_2/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3)
```
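(Editorial sketch, not part of the original report; the paths are placeholders.) A directory written by `save_to_disk` is read back with `datasets.load_from_disk` rather than `load_dataset`, so the copied folder on the second machine could be used like this:
```python
import datasets

# on the machine with internet access
imdb = datasets.load_dataset("imdb")
imdb.save_to_disk("/idiap/temp/rkarimi/hf_datasets/imdb")

# on the offline machine, after copying the folder
imdb = datasets.load_from_disk("/idiap/temp/rkarimi/hf_datasets/imdb")
train_subset = imdb["train"].select(range(10))  # first 10 examples, still a Dataset
```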
Afterwards I tried to set HF_HOME in bash; this makes it read from that location, but it still does not let the datasets library load from the saved path and it keeps downloading the data. Could you tell me how to fix this issue, @lhoestq? Thanks.
Also, this is on the cloud, so I save the datasets to a path and copy it to "another machine" to load the data.
### Error stack
```
Traceback (most recent call last):
File "./finetune_t5_trainer.py", line 344, in <module>
main()
File "./finetune_t5_trainer.py", line 232, in main
for task in data_args.eval_tasks} if training_args.do_test else None
File "./finetune_t5_trainer.py", line 232, in <dictcomp>
for task in data_args.eval_tasks} if training_args.do_test else None
File "/workdir/seq2seq/data/tasks.py", line 136, in get_dataset
split = self.get_sampled_split(split, n_obs)
File "/workdir/seq2seq/data/tasks.py", line 64, in get_sampled_split
dataset = self.load_dataset(split)
File "/workdir/seq2seq/data/tasks.py", line 454, in load_dataset
split=split, script_version="master")
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module
head_hf_s3(path, filename=name, dataset=dataset)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3
return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head
url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout
File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head
return request('head', url, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7ff6d6c60a20>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1671/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1671/timeline | null | completed | false |