url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.83B) | node_id (string, 18-32 chars) | number (int64, 1-6.09k) | title (string, 1-290 chars) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/737/comments | https://api.github.com/repos/huggingface/datasets/issues/737/events | https://github.com/huggingface/datasets/issues/737 | 722,463,923 | MDU6SXNzdWU3MjI0NjM5MjM= | 737 | Trec Dataset Connection Error | [] | closed | false | null | 1 | 2020-10-15T15:57:53Z | 2020-10-19T08:54:36Z | 2020-10-19T08:54:36Z | null | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken)
<details>
<summary>Error Logs</summary>
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-8-66bf1242096e> in <module>()
----> 1 load_dataset("trec")
10 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
</details> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/737/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/737/timeline | null | completed | null | null | false | [
"Thanks for reporting.\r\nThat's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.\r\n\r\nI'm opening a PR to update the url"
] |
https://api.github.com/repos/huggingface/datasets/issues/2798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2798/comments | https://api.github.com/repos/huggingface/datasets/issues/2798/events | https://github.com/huggingface/datasets/pull/2798 | 970,493,126 | MDExOlB1bGxSZXF1ZXN0NzEyNDM3ODc2 | 2,798 | Fix streaming zip files | [] | closed | false | null | 2 | 2021-08-13T15:17:01Z | 2021-08-16T14:16:50Z | 2021-08-13T15:38:28Z | null | Currently, streaming remote zip data files gives `FileNotFoundError` message:
```python
data_files = f"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip"
ds = load_dataset("json", split="train", data_files=data_files, streaming=True)
next(iter(ds))
```
This PR fixes it by adding a glob string.
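For illustration, the failure mode and the intended fix can be sketched with `fsspec` directly (the exact glob pattern used internally is an assumption here):
```python
import fsspec

# Pointing at the root of the remote archive fails (the behavior this PR fixes):
# fsspec.open("zip://::https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip")

# Adding a glob pattern lets fsspec resolve the files inside the zip so they can be streamed:
files = fsspec.open_files(
    "zip://*::https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip"
)
print([f.path for f in files])
```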
The corresponding test is implemented in PR #2786. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2798/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2798",
"merged_at": "2021-08-13T15:38:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2798"
} | true | [
"Hi ! I don't fully understand this change @albertvillanova \r\nThe `_extract` method used to return the compound URL that points to the root of the inside of the archive.\r\nThis way users can use the usual os.path.join or other functions to point to the relevant files. I don't see why you're using a glob pattern ?",
"This change is to allow this:\r\n```python\r\ndata_files = f\"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip\"\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, streaming=True)\r\nassert isinstance(ds, IterableDataset)\r\n```\r\nNote that in this case the user will not call os.path.join.\r\n\r\nBefore this PR it gave error because pointing to the root, without any subsequent join, gives error:\r\n```python\r\nfsspec.open(\"zip://::https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip\")\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/3639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3639/comments | https://api.github.com/repos/huggingface/datasets/issues/3639/events | https://github.com/huggingface/datasets/issues/3639 | 1,116,021,420 | I_kwDODunzps5ChSKs | 3,639 | same value of precision, recall, f1 score at each epoch for classification task. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-01-27T10:14:16Z | 2022-02-24T09:02:18Z | 2022-02-24T09:02:17Z | null | **1st Epoch:**
1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s]
01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow
01/27/2022 09:30:49 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow
PRECISION: {'precision': 0.7612903225806451}
RECALL: {'recall': 0.7612903225806451}
F1: {'f1': 0.7612903225806451}
{'eval_loss': 1.4658324718475342, 'eval_accuracy': 0.7612903118133545, 'eval_runtime': 30.0054, 'eval_samples_per_second': 46.492, 'eval_steps_per_second': 46.492, 'epoch': 3.0}
**4th Epoch:**
1/27/2022 09:56:55 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.92it/s]
01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow
01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow
PRECISION: {'precision': 0.7698924731182796}
RECALL: {'recall': 0.7698924731182796}
F1: {'f1': 0.7698924731182796}
## Environment info
!git clone https://github.com/huggingface/transformers
%cd transformers
!pip install .
!pip install -r /content/transformers/examples/pytorch/token-classification/requirements.txt
!pip install datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3639/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3639/timeline | null | completed | null | null | false | [
"Hi @Dhanachandra, \r\n\r\nWe have tests for all our metrics and they work as expected: under the hood, we use scikit-learn implementations.\r\n\r\nMaybe the cause is somewhere else. For example:\r\n- Is it a binary or a multiclass or a multilabel classification? Default computation of these metrics is for binary classification; if you would like multiclass or multilabel, you should pass the corresponding parameters; see their documentation (e.g.: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) or code below:\r\n\r\nhttps://huggingface.co/docs/datasets/using_metrics.html#computing-the-metric-scores\r\n\r\n```python\r\nIn [1]: from datasets import load_metric\r\n\r\nIn [2]: precision = load_metric(\"precision\")\r\n\r\nIn [3]: print(precision.inputs_description)\r\n\r\nArgs:\r\n predictions: Predicted labels, as returned by a model.\r\n references: Ground truth labels.\r\n labels: The set of labels to include when average != 'binary', and\r\n their order if average is None. Labels present in the data can\r\n be excluded, for example to calculate a multiclass average ignoring\r\n a majority negative class, while labels not present in the data will\r\n result in 0 components in a macro average. For multilabel targets,\r\n labels are column indices. By default, all labels in y_true and\r\n y_pred are used in sorted order.\r\n average: This parameter is required for multiclass/multilabel targets.\r\n If None, the scores for each class are returned. Otherwise, this\r\n determines the type of averaging performed on the data:\r\n binary: Only report results for the class specified by pos_label.\r\n This is applicable only if targets (y_{true,pred}) are binary.\r\n micro: Calculate metrics globally by counting the total true positives,\r\n false negatives and false positives.\r\n macro: Calculate metrics for each label, and find their unweighted mean.\r\n This does not take label imbalance into account.\r\n weighted: Calculate metrics for each label, and find their average\r\n weighted by support (the number of true instances for each label).\r\n This alters ‘macro’ to account for label imbalance; it can result\r\n in an F-score that is not between precision and recall.\r\n samples: Calculate metrics for each instance, and find their average\r\n (only meaningful for multilabel classification).\r\n sample_weight: Sample weights.\r\n\r\nReturns:\r\n precision: Precision score.\r\n\r\nExamples:\r\n\r\n >>> precision_metric = datasets.load_metric(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1], predictions=[0, 1])\r\n >>> print(results)\r\n {'precision': 1.0}\r\n\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')\r\n >>> print(results)\r\n {'precision': 0.3333333333333333}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print(results)\r\n {'precision': array([0.66666667, 0. , 0. ])}\r\n```\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4973/comments | https://api.github.com/repos/huggingface/datasets/issues/4973/events | https://github.com/huggingface/datasets/pull/4973 | 1,371,600,074 | PR_kwDODunzps4-33JW | 4,973 | [GH->HF] Load datasets from the Hub | [] | closed | false | null | 2 | 2022-09-13T15:01:41Z | 2022-09-15T15:26:51Z | 2022-09-15T15:24:26Z | null | Currently datasets with no namespace (e.g. squad, glue) are loaded from github.
In this PR I changed this logic to use the Hugging Face Hub instead.
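For illustration (`squad` is just an example of a dataset with no namespace), the user-facing call is unchanged; only where the files are resolved from changes:
```python
from datasets import load_dataset

# Same call as before: after this change, "squad" is resolved from
# huggingface.co/datasets/squad instead of the GitHub repository.
ds = load_dataset("squad", split="train")
```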
This is the first step in removing all the dataset scripts in this repository
related to discussions in https://github.com/huggingface/datasets/pull/4059 (I should have continued from this PR actually) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4973/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4973/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4973.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4973",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4973.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4973"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Duplicate of:\r\n- #4059"
] |
https://api.github.com/repos/huggingface/datasets/issues/5734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5734/comments | https://api.github.com/repos/huggingface/datasets/issues/5734/events | https://github.com/huggingface/datasets/issues/5734 | 1,662,058,028 | I_kwDODunzps5jEP4s | 5,734 | Remove temporary pin of fsspec | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2023-04-11T09:04:17Z | 2023-04-11T11:04:52Z | 2023-04-11T11:04:52Z | null | Once root cause is found and fixed, remove the temporary pin introduced by:
- #5731 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5734/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5734/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1718/comments | https://api.github.com/repos/huggingface/datasets/issues/1718/events | https://github.com/huggingface/datasets/issues/1718 | 783,474,753 | MDU6SXNzdWU3ODM0NzQ3NTM= | 1,718 | Possible cache miss in datasets | [] | closed | false | null | 18 | 2021-01-11T15:37:31Z | 2022-06-29T14:54:42Z | 2021-01-26T02:47:59Z | null | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading from cache.
Is this a bug or am I doing something wrong?
Is there a way to fix this and avoid all the recomputation?
Thanks
Edit:
transformers==3.5.1
datasets==1.2.0
```
from datasets import load_dataset
from transformers import AutoTokenizer
datasets = load_dataset('wikitext', 'wikitext-103-raw-v1')
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)
column_names = datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
def tokenize_function(examples):
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=60,
remove_columns=[text_column_name],
load_from_cache_file=True,
)
max_seq_length = tokenizer.model_max_length
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {
k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
result = {
k: [t[i: i + max_seq_length]
for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
return result
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=60,
load_from_cache_file=True,
)
print(tokenized_datasets)
print('finished')
``` | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1718/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1718/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\nI was able to reproduce thanks to your code and find the origin of the bug.\r\nThe cache was not reusing the same file because one object was not deterministic. It comes from a conversion from `set` to `list` in the `datasets.arrrow_dataset.transmit_format` function, where the resulting list would not always be in the same order and therefore the function that computes the hash used by the cache would not always return the same result.\r\nI'm opening a PR to fix this.\r\n\r\nAlso we plan to do a new release in the coming days so you can expect the fix to be available soon.\r\nNote that you can still specify `cache_file_name=` in the second `map()` call to name the cache file yourself if you want to.",
"Thanks for the fast reply, waiting for the fix :)\r\n\r\nI tried to use `cache_file_names` and wasn't sure how, I tried to give it the following:\r\n```\r\ntokenized_datasets = tokenized_datasets.map(\r\n group_texts,\r\n batched=True,\r\n num_proc=60,\r\n load_from_cache_file=True,\r\n cache_file_names={k: f'.cache/{str(k)}' for k in tokenized_datasets}\r\n)\r\n```\r\n\r\nand got an error:\r\n```\r\nmultiprocess.pool.RemoteTraceback:\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/venv/lib/python3.6/site-packages/multiprocess/pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 157, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/venv/lib/python3.6/site-packages/datasets/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1491, in _map_single\r\n tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)\r\n File \"/usr/lib/python3.6/tempfile.py\", line 690, in NamedTemporaryFile\r\n (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)\r\n File \"/usr/lib/python3.6/tempfile.py\", line 401, in _mkstemp_inner\r\n fd = _os.open(file, flags, 0o600)\r\nFileNotFoundError: [Errno 2] No such file or directory: '_00000_of_00060.cache/tmpsvszxtop'\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 48, in <module>\r\n cache_file_names={k: f'.cache/{str(k)}' for k in tokenized_datasets}\r\n File \"/venv/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 303, in map\r\n for k, dataset in self.items()\r\n File \"/venv/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 303, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1317, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1317, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/venv/lib/python3.6/site-packages/multiprocess/pool.py\", line 644, in get\r\n raise self._value\r\nFileNotFoundError: [Errno 2] No such file or directory: '_00000_of_00060.cache/tmpsvszxtop'\r\n```\r\n",
"The documentation says\r\n```\r\ncache_file_names (`Optional[Dict[str, str]]`, defaults to `None`): Provide the name of a cache file to use to store the\r\n results of the computation instead of the automatically generated cache file name.\r\n You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.\r\n```\r\nWhat is expected is simply the name of a file, not a path. The file will be located in the cache directory of the `wikitext` dataset. You can try again with something like\r\n```python\r\ncache_file_names = {k: f'tokenized_and_grouped_{str(k)}' for k in tokenized_datasets}\r\n```",
"Managed to get `cache_file_names` working and caching works well with it\r\nHad to make a small modification for it to work:\r\n```\r\ncache_file_names = {k: f'tokenized_and_grouped_{str(k)}.arrow' for k in tokenized_datasets}\r\n```",
"Another comment on `cache_file_names`, it doesn't save the produced cached files in the dataset's cache folder, it requires to give a path to an existing directory for it to work.\r\nI can confirm that this is how it works in `datasets==1.1.3`",
"Oh yes indeed ! Maybe we need to update the docstring to mention that it is a path",
"I fixed the docstring. Hopefully this is less confusing now: https://github.com/huggingface/datasets/commit/42ccc0012ba8864e6db1392430100f350236183a",
"I upgraded to the latest version and I encountered some strange behaviour, the script I posted in the OP doesn't trigger recalculation, however, if I add the following change it does trigger partial recalculation, I am not sure if its something wrong on my machine or a bug:\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\n\r\ndatasets = load_dataset('wikitext', 'wikitext-103-raw-v1')\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)\r\n\r\ncolumn_names = datasets[\"train\"].column_names\r\ntext_column_name = \"text\" if \"text\" in column_names else column_names[0]\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n# CHANGE\r\nprint('hello')\r\n# CHANGE\r\n\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n...\r\n```\r\nI am using datasets in the `run_mlm.py` script in the transformers examples and I found that if I change the script without touching any of the preprocessing. it still triggers recalculation which is very weird\r\n\r\nEdit: accidently clicked the close issue button ",
"This is because the `group_texts` line definition changes (it is defined 3 lines later than in the previous call). Currently if a function is moved elsewhere in a script we consider it to be different.\r\n\r\nNot sure this is actually a good idea to keep this behavior though. We had this as a security in the early development of the lib but now the recursive hashing of objects is robust so we can probably remove that.\r\nMoreover we're already ignoring the line definition for lambda functions.",
"I opened a PR to change this, let me know what you think.",
"Sounds great, thank you for your quick responses and help! Looking forward for the next release.",
"I am having a similar issue where only the grouped files are loaded from cache while the tokenized ones aren't. I can confirm both datasets are being stored to file, but only the grouped version is loaded from cache. Not sure what might be going on. But I've tried to remove all kinds of non deterministic behaviour, but still no luck. Thanks for the help!\r\n\r\n\r\n```python\r\n # Datasets\r\n train = sorted(glob(args.data_dir + '*.{}'.format(args.ext)))\r\n if args.dev_split >= len(train):\r\n raise ValueError(\"Not enough dev files\")\r\n dev = []\r\n state = random.Random(1001)\r\n for _ in range(args.dev_split):\r\n dev.append(train.pop(state.randint(0, len(train) - 1)))\r\n\r\n max_seq_length = min(args.max_seq_length, tokenizer.model_max_length)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples['text'], return_special_tokens_mask=True)\r\n\r\n def group_texts(examples):\r\n # Concatenate all texts from our dataset and generate chunks of max_seq_length\r\n concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # Truncate (not implementing padding)\r\n total_length = (total_length // max_seq_length) * max_seq_length\r\n # Split by chunks of max_seq_length\r\n result = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n return result\r\n\r\n datasets = load_dataset(\r\n 'text', name='DBNL', data_files={'train': train[:10], 'dev': dev[:5]}, \r\n cache_dir=args.data_cache_dir)\r\n datasets = datasets.map(tokenize_function, \r\n batched=True, remove_columns=['text'], \r\n cache_file_names={k: os.path.join(args.data_cache_dir, f'{k}-tokenized') for k in datasets},\r\n load_from_cache_file=not args.overwrite_cache)\r\n datasets = datasets.map(group_texts, \r\n batched=True,\r\n cache_file_names={k: os.path.join(args.data_cache_dir, f'{k}-grouped') for k in datasets},\r\n load_from_cache_file=not args.overwrite_cache)\r\n```\r\n\r\nAnd this is the log\r\n\r\n```\r\n04/26/2021 10:26:59 - WARNING - datasets.builder - Using custom data configuration DBNL-f8d988ad33ccf2c1\r\n04/26/2021 10:26:59 - WARNING - datasets.builder - Reusing dataset text (/home/manjavacasema/data/.cache/text/DBNL-f8d988ad33ccf2c1/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 21.07ba/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 40/40 [00:01<00:00, 24.28ba/s]\r\n04/26/2021 10:27:01 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/manjavacasema/data/.cache/train-grouped\r\n04/26/2021 10:27:01 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/manjavacasema/data/.cache/dev-grouped\r\n```\r\n",
"Hi ! What tokenizer are you using ?",
"It's the ByteLevelBPETokenizer",
"This error happened to me too, when I tried to supply my own fingerprint to `map()` via the `new_fingerprint` arg.\r\n\r\nEdit: realized it was because my path was weird and had colons and brackets and slashes in it, since one of the variable values I included in the fingerprint was a dataset split like \"train[:10%]\". I fixed it with [this solution](https://stackoverflow.com/a/13593932/2287177) from StackOverflow to just remove those invalid characters from the fingerprint.",
"Good catch @jxmorris12, maybe we should do additional checks on the valid characters for fingerprints ! Would you like to contribute this ?\r\n\r\nI think this can be added here, when we set the fingerprint(s) that are passed `map`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/25bb7c9cbf519fbbf9abf3898083b529e7762705/src/datasets/fingerprint.py#L449-L454\r\n\r\nmaybe something like\r\n```python\r\nif kwargs.get(fingerprint_name) is None:\r\n ...\r\nelse:\r\n # In this case, it's the user who specified the fingerprint manually:\r\n # we need to make sure it's a valid hash\r\n validate_fingerprint(kwargs[fingerprint_name])\r\n```\r\n\r\nOtherwise I can open a PR later",
"I opened a PR here to add the fingerprint validation: https://github.com/huggingface/datasets/pull/4587\r\n\r\nEDIT: merged :)",
"thank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5764/comments | https://api.github.com/repos/huggingface/datasets/issues/5764/events | https://github.com/huggingface/datasets/issues/5764 | 1,670,740,198 | I_kwDODunzps5jlXjm | 5,764 | ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1 | [] | closed | false | null | 7 | 2023-04-17T09:08:18Z | 2023-04-18T07:18:20Z | 2023-04-18T07:18:20Z | null | ### Describe the bug
I want to use this dataset (https://huggingface.co/datasets/josianem/imdb), so I am trying to load it using the following code:
```
dataset = load_dataset("josianem/imdb")
```
The dataset does not load and gives the following error message:
```
Traceback (most recent call last):
File "sample.py", line 3, in <module>
dataset = load_dataset("josianem/imdb")
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py", line 79, in _split_generators
archive = dl_manager.download(_DOWNLOAD_URL)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 289, in cached_path
output_path = get_from_cache(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 606, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
```
### Steps to reproduce the bug
You can reproduce the error by using the following code:
```
from datasets import load_dataset, load_metric
dataset = load_dataset("josianem/imdb")
```
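As suggested in the replies below, once the URL is reachable again, the stale (empty) cache entry can be refreshed by forcing a re-download:
```python
from datasets import load_dataset

# Re-fetch the data files instead of reusing the empty cached download:
dataset = load_dataset("josianem/imdb", download_mode="force_redownload")
```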
### Expected behavior
The dataset should load (I am using this dataset for the first time, so I am not fully aware of the exact expected behavior).
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5764/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5764/timeline | null | completed | null | null | false | [
"Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25799\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25000\r\n })\r\n unsupervised: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 50000\r\n })\r\n})\r\n```\r\n\r\nCould you please retry to load the dataset? Maybe there was a temporary connection issue to Dropbox.",
"Thanks @albertvillanova. I am facing another issue now\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 738, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nThis is my code\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\")\r\n```",
"Your connection didn't work and you got an empty dataset (`num_bytes=0, num_examples=0`):\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: \r\n[\r\n {\r\n 'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }\r\n]\r\n```\r\n\r\nCould you please try the link in your browser and see if it works? https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n- If it does not work, you should contact the author of the dataset in their Community tab (https://huggingface.co/datasets/josianem/imdb/discussions) and inform them, so that they can host their data elsewhere, for example on the Hugging Face Hub itself\r\n\r\nIf the link works, you should try to load the dataset but forcing the re-download of the data files (so that the cache is refreshed with the actual data file), by passing `download_mode=\"force_redownload\"`:\r\n```python\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```",
"After pasting the link in the browser, it did start the download so it seems that the link is working. But even after including the `download_mode` in my code I am facing the same issue:\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 704, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py\", line 79, in _split_generators\r\n archive = dl_manager.download(_DOWNLOAD_URL)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 197, in map_nested\r\n return function(data_struct)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 289, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 606, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n```\r\n\r\nMy code:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```",
"I have tried again to reproduce your issue without success: the dataset loads perfectly, both in my local machine and in a Colab notebook.\r\n- See: https://colab.research.google.com/drive/1dky3T0XGFuldggy22NNQQN-UqOFqvnuY?usp=sharing\r\n\r\nI think the cause maight be that you are using a very old version of `datasets`. Please, could you update it and retry?\r\n```\r\npip install -U datasets\r\n```",
"That worked!! Thanks @albertvillanova : )\r\n\r\n```\r\nDownloading builder script: 100%|███████| 4.20k/4.20k [00:00<00:00, 6.69MB/s]\r\nDownloading metadata: 100%|█████████████| 2.60k/2.60k [00:00<00:00, 3.41MB/s]\r\nDownloading readme: 100%|███████████████| 7.52k/7.52k [00:00<00:00, 12.6MB/s]\r\nDownloading and preparing dataset imdb/plain_text to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f...\r\nDownloading data: 100%|███████████████████| 301M/301M [01:32<00:00, 3.25MB/s]\r\nDataset imdb downloaded and prepared to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f. Subsequent calls will reuse this data.\r\n100%|█████████████████████████████████████████| 3/3 [00:00<00:00, 794.83it/s]\r\n```\r\n\r\nThe code I used:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n\r\n```\r\n\r\nBut when I remove `download_mode=\"force_redownload\"` I get the same error. Any guess on that?",
"That is because the cache got the \"empty\" download file the first time you tried and got the connection error.\r\n\r\nThen, once you no longer get the connection error, you need to refresh the cache by passing `download_mode=\"force_redownload\"`."
] |
https://api.github.com/repos/huggingface/datasets/issues/4216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4216/comments | https://api.github.com/repos/huggingface/datasets/issues/4216/events | https://github.com/huggingface/datasets/pull/4216 | 1,214,614,029 | PR_kwDODunzps42u1_w | 4,216 | Avoid recursion error in map if example is returned as dict value | [] | closed | false | null | 1 | 2022-04-25T14:40:32Z | 2022-05-04T17:20:06Z | 2022-05-04T17:12:52Z | null | I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko).
This code replicates the bug:
```python
from datasets import Dataset
dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]})
dset.map(lambda ex: {"translation": ex})
```
and this is the fix for it (before this PR):
```python
from datasets import Dataset
dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]})
dset.map(lambda ex: {"translation": dict(ex)})
```
Internally, this can be fixed by merging the two dicts via dict unpacking (instead of `dict.update()`) in `Dataset.map`, which avoids creating recursive dictionaries.
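A minimal sketch of the difference (illustrative only, not the actual `Dataset.map` internals):
```python
example = {"en": "aa", "fr": "cc"}

# dict.update mutates `example`, so it ends up containing a reference to itself:
example.update({"translation": example})  # example["translation"] is example -> recursive dict

# Dict unpacking builds a fresh dict instead, so no cycle is created:
example2 = {"en": "aa", "fr": "cc"}
merged = {**example2, **{"translation": example2}}  # merged is a new dict; example2 stays acyclic
```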
P.S. `{**a, **b}` is slightly more performant than `a.update(b)` in my benchmarks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4216/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4216/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4216.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4216",
"merged_at": "2022-05-04T17:12:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4216.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4216"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/372/comments | https://api.github.com/repos/huggingface/datasets/issues/372/events | https://github.com/huggingface/datasets/pull/372 | 654,774,420 | MDExOlB1bGxSZXF1ZXN0NDQ3NDMzNTA4 | 372 | Make the json script more flexible | [] | closed | false | null | 0 | 2020-07-10T13:15:15Z | 2020-07-10T14:52:07Z | 2020-07-10T14:52:06Z | null | Fix https://github.com/huggingface/nlp/issues/359
Fix https://github.com/huggingface/nlp/issues/369
The JSON script can now accept JSON files containing a single dict with the records as a list in one attribute of the dict (previously it only accepted JSON files containing the records as rows of dicts in the file).
In this case, you should indicate, using `field=XXX`, the name of the field in the JSON structure that contains the records you want to load. The records can be a dict of lists or a list of dicts.
E.g. to load the SQuAD dataset JSON (without using the `squad` specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do:
```python
from nlp import load_dataset
dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data')
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/372/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/372/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/372.diff",
"html_url": "https://github.com/huggingface/datasets/pull/372",
"merged_at": "2020-07-10T14:52:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/372.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/372"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5398/comments | https://api.github.com/repos/huggingface/datasets/issues/5398/events | https://github.com/huggingface/datasets/issues/5398 | 1,514,425,231 | I_kwDODunzps5aREuP | 5,398 | Unpin pydantic | [] | closed | false | null | 0 | 2022-12-30T10:37:31Z | 2022-12-30T10:43:41Z | 2022-12-30T10:43:41Z | null | Once `pydantic` fixes their issue in their 1.10.3 version, unpin it.
See issue:
- #5394
See temporary fix:
- #5395 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5398/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5398/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3133/comments | https://api.github.com/repos/huggingface/datasets/issues/3133/events | https://github.com/huggingface/datasets/pull/3133 | 1,032,511,710 | PR_kwDODunzps4tftyZ | 3,133 | Support Audio feature in streaming mode | [] | closed | false | null | 0 | 2021-10-21T13:37:57Z | 2021-11-12T14:13:05Z | 2021-11-12T14:13:04Z | null | Fix #3132. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3133/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3133.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3133",
"merged_at": "2021-11-12T14:13:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3133.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3133"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4196/comments | https://api.github.com/repos/huggingface/datasets/issues/4196/events | https://github.com/huggingface/datasets/issues/4196 | 1,211,271,261 | I_kwDODunzps5IMohd | 4,196 | Embed image and audio files in `save_to_disk` | [] | closed | false | null | 0 | 2022-04-21T16:25:18Z | 2022-12-14T18:22:59Z | 2022-12-14T18:22:59Z | null | Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files.
Adding `embed_external_files` (set to `True` by default) to `save_to_disk` would be kind of a breaking change, since some users will get bigger Arrow files when updating the lib, but the advantages are nice:
- the resulting dataset is self contained, in case you want to delete your cache for example or share it with someone else
- users can also upload these Arrow files to cloud storage via the `fs` parameter, and in this case they would expect to upload a self-contained dataset
- consistency with push_to_hub
This can be implemented at the same time as sharding for `save_to_disk` for efficiency, reusing the helpers from `push_to_hub` to embed the external files.
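A sketch of the intended usage, assuming the proposed `embed_external_files` flag (hypothetical at this point; today `save_to_disk` stores local file paths instead of the bytes):
```python
from datasets import Dataset, Image

# Assumes two local image files exist for the sake of the example.
ds = Dataset.from_dict({"image": ["img1.png", "img2.png"]}).cast_column("image", Image())

# With embedding enabled, the written Arrow files would contain the image bytes,
# making the saved dataset self-contained (cache can be deleted, dataset can be shared).
ds.save_to_disk("my_dataset_dir", embed_external_files=True)
```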
cc @mariosasko | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4196/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2935/comments | https://api.github.com/repos/huggingface/datasets/issues/2935/events | https://github.com/huggingface/datasets/pull/2935 | 999,518,469 | PR_kwDODunzps4r5j8B | 2,935 | Add Jigsaw unintended Bias | [] | closed | false | null | 3 | 2021-09-17T16:12:31Z | 2021-09-24T10:41:52Z | 2021-09-24T10:41:52Z | null | Hi,
Here's a first attempt at this dataset. It would be great if it could be merged relatively quickly, as it is needed for BigScience-related stuff.
This requires manual download, and I had some trouble generating dummy_data in this setting, so feedback there is welcome. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2935/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2935",
"merged_at": "2021-09-24T10:41:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2935"
} | true | [
"Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix",
"@lhoestq implemented your changes, I think this might be ready for another look.",
"Thanks @lhoestq, implemented the changes, let me know if anything else pops up."
] |
https://api.github.com/repos/huggingface/datasets/issues/5679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5679/comments | https://api.github.com/repos/huggingface/datasets/issues/5679/events | https://github.com/huggingface/datasets/issues/5679 | 1,645,184,622 | I_kwDODunzps5iD4Zu | 5,679 | Allow load_dataset to take a working dir for intermediate data | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 4 | 2023-03-29T07:21:09Z | 2023-04-12T22:30:25Z | null | null | ### Feature request
As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like
```
load_dataset(..., working_dir="/temp/dir", cache_dir="/cloud_dir")
```
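A rough approximation that is already possible today via the builder API (adapted from the discussion below; `"imdb"` is just a placeholder dataset name):
```python
from datasets import load_dataset_builder

# Intermediate download/processing happens on fast local disk...
builder = load_dataset_builder("imdb", cache_dir="/temp/dir")
# ...and only the final Arrow files are written to the mounted cloud directory.
builder.download_and_prepare("/cloud_dir")
```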
### Motivation
This will help the use case of using `datasets` with cloud storage as the cache, and it will help boost performance.
### Your contribution
I can provide a PR to fix this if the proposal seems reasonable. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5679/timeline | null | null | null | null | false | [
"Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud storage with:\r\n```python\r\nbuilder = load_dataset_builder(..., cache_dir=\"/temp/dir\")\r\nbuilder.download_and_prepare(\"/cloud_dir\")\r\n```\r\n\r\nbut then \r\n```python\r\nds = builder.as_dataset()\r\n```\r\nwould fail if \"/cloud_dir\" is not a local directory.",
"In my use case, I am trying to mount the S3 bucket as local system with S3FS-FUSE / [goofys](https://github.com/kahing/goofys). I want to use S3 to save the download data and save checkpoint for training for persistent. Setting the s3 location as cache directory is not fast enough. That is why I want to set a work directory for temp data for memory map and only save the final result to s3 cache. ",
"You can try setting `HF_DATASETS_DOWNLOADED_DATASETS_PATH` and `HF_DATASETS_EXTRACTED_DATASETS_PATH` to S3, and `HF_DATASETS_CACHE` to your local disk.\r\n\r\nThis way all your downloaded and extracted data are on your mounted S3, but the datasets Arrow files are on your local disk",
"If we hope to also persist the Arrow files on the mounted S3 but work with the efficiency of local disk, is there any recommended way to do this, other than copying the Arrow files from local disk to S3?"
] |
https://api.github.com/repos/huggingface/datasets/issues/1547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1547/comments | https://api.github.com/repos/huggingface/datasets/issues/1547/events | https://github.com/huggingface/datasets/pull/1547 | 765,562,792 | MDExOlB1bGxSZXF1ZXN0NTM4OTkwOTMy | 1,547 | Adding PolEval2019 Machine Translation Task dataset | [] | closed | false | null | 6 | 2020-12-13T17:50:03Z | 2023-04-03T09:20:23Z | 2020-12-21T16:13:21Z | null | Facing an error with pytest in training. Dummy data is passing.
README has to be updated. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1547/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1547/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1547.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1547",
"merged_at": "2020-12-21T16:13:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1547.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1547"
} | true | [
"**NOTE:**\r\n\r\n- Train and Dev: Manually downloaded (auto download is repeatedly giving `ConnectionError` for one of the files), Test: Auto Download\r\n- Dummy test is passing\r\n- The json file has been created with hard-coded paths for the manual downloads _(hardcoding has been removed from the final uploaded script)_\r\n- datasets-cli is still **failing** . It is not picking the right directory for the config. For instance, my folder structure is as below:\r\n ```\r\n ~/Downloads/Data/\r\n |--- English-to-Polish\r\n |--- (corresponding files) \r\n |--- Russian-Polish\r\n |--- (corresponding files) \r\n```\r\n\r\nWhen ru-pl is selected, ideally it has to search in Russian-Polish folder, but it is searching in '/Downloads/Data/' folder and hence getting a FileNotFound error.\r\n\r\nThe command run is \r\n`python datasets-cli test datasets/poleval2019_mt/ --save_infos --all_configs --data_dir ~/Downloads/Data/\r\n`\r\n",
"Hi !\r\nThanks for the changes :)\r\n\r\nThe only error left is the dummy data. Since we changed for standard downloads instead of manual downloads its structure changed. Fortunately you can auto-generate the dummy data with this command:\r\n\r\n```\r\ndatasets-cli dummy_data ./datasets/poleval2019_mt --auto_generate --match_text_files \"*\"\r\n```\r\n\r\nCan you regenerate the dummy data using this command please ?",
"Thank you for the help @lhoestq !! I was generating the dummy dataset in a wrong way! That _--match_text_files \"*\"_ did the trick! Now all the tests have passed! :-)",
"Hi @vrindaprabhu ! Do you still have the Poleval2019 data files somewhere by any chance ? It appears the google drive URLs are not working anymore",
"Hi @lhoestq. Just checked. I do not have the backup of the data anywhere. It also appears that PolEval does not repeat its tasks, the data seem to have gone forever. Do you think I should try contacting the organizers for more info?",
"We tried already and they don't have the data anymore :("
] |
https://api.github.com/repos/huggingface/datasets/issues/2919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2919/comments | https://api.github.com/repos/huggingface/datasets/issues/2919/events | https://github.com/huggingface/datasets/issues/2919 | 997,127,487 | I_kwDODunzps47bvU_ | 2,919 | Unwanted progress bars when accessing examples | [] | closed | false | null | 1 | 2021-09-15T14:05:10Z | 2021-09-15T17:21:49Z | 2021-09-15T17:18:23Z | null | When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples:
```python
In [1]: import datasets as ds
In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch")
In [3]: d[0]
100%|████████████████████████████████| 1/1 [00:00<00:00, 3172.70it/s]
Out[3]: {'a': tensor(0)}
```
This is because the PyTorch formatter calls `map_nested`, which uses progress bars.
cc @sgugger | {
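For anyone hitting this before the patch release mentioned in the comments, a minimal workaround sketch (assuming the installed version exposes a `disable_progress_bar` helper; its exact location has moved between releases, so this is illustrative rather than definitive):
```python
import datasets

# Globally silence progress bars before indexing into the dataset.
# Assumption: the helper is exposed at the top level, as in recent releases;
# older versions keep it under datasets.utils.
datasets.disable_progress_bar()

d = datasets.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch")
print(d[0])  # expected: {'a': tensor(0)} with no tqdm output
```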
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2919/timeline | null | completed | null | null | false | [
"doing a patch release now :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3366/comments | https://api.github.com/repos/huggingface/datasets/issues/3366/events | https://github.com/huggingface/datasets/issues/3366 | 1,069,214,022 | I_kwDODunzps4_uulG | 3,366 | Add multimodal datasets | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 0 | 2021-12-02T07:24:04Z | 2023-02-28T16:29:22Z | null | null | Epic issue to track the addition of multimodal datasets:
- [ ] #2526
- [x] #1842
- [ ] #1810
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
@VictorSanh feel free to add and sort by priority any interesting dataset. I have added the multimodal dataset requests which were already present as issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3366/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4444/comments | https://api.github.com/repos/huggingface/datasets/issues/4444/events | https://github.com/huggingface/datasets/pull/4444 | 1,259,738,209 | PR_kwDODunzps45D2XX | 4,444 | Fix kwargs in docstrings | [] | closed | false | null | 1 | 2022-06-03T10:29:02Z | 2022-06-03T11:01:28Z | 2022-06-03T10:52:46Z | null | To fix the rendering of `**kwargs` in docstrings, parentheses must be added afterwards.
See:
- huggingface/doc-builder/issues/235 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4444/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4444/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4444.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4444",
"merged_at": "2022-06-03T10:52:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4444.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4444"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2683/comments | https://api.github.com/repos/huggingface/datasets/issues/2683/events | https://github.com/huggingface/datasets/issues/2683 | 948,721,379 | MDU6SXNzdWU5NDg3MjEzNzk= | 2,683 | Cache directories changed due to recent changes in how config kwargs are handled | [] | closed | false | null | 0 | 2021-07-20T14:37:57Z | 2021-07-20T16:27:15Z | 2021-07-20T16:27:15Z | null | Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example:
```python
from datasets import load_dataset_builder
c4_builder = load_dataset_builder("c4", "en")
print(c4_builder.cache_dir)
# /Users/quentinlhoest/.cache/huggingface/datasets/c4/en-174d3b7155eb68db/0.0.0/...
# instead of
# /Users/quentinlhoest/.cache/huggingface/datasets/c4/en/0.0.0/...
```
This issue could be annoying since it would simply ignore users' old cache directories and regenerate the datasets.
cc @stas00 this is what you experienced a few days ago
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2683/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2683/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2101/comments | https://api.github.com/repos/huggingface/datasets/issues/2101/events | https://github.com/huggingface/datasets/pull/2101 | 838,586,184 | MDExOlB1bGxSZXF1ZXN0NTk4NzQzMDM4 | 2,101 | MIAM dataset - new citation details | [] | closed | false | null | 2 | 2021-03-23T10:41:23Z | 2021-03-23T18:08:10Z | 2021-03-23T18:08:10Z | null | Hi @lhoestq, I have updated the citations to reference an OpenReview preprint. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2101/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2101/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2101.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2101",
"merged_at": "2021-03-23T18:08:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2101.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2101"
} | true | [
"Hi !\r\nLooks like there's a unicode error in the new citation in the miam.py file.\r\nCould you try to fix it ? Not sure from which character it comes from though\r\n\r\nYou can test if it works on your side with\r\n```\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_miam\r\n```",
"Unicode error resolved!"
] |
https://api.github.com/repos/huggingface/datasets/issues/694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/694/comments | https://api.github.com/repos/huggingface/datasets/issues/694/events | https://github.com/huggingface/datasets/pull/694 | 712,827,751 | MDExOlB1bGxSZXF1ZXN0NDk2MjQ1NzU0 | 694 | Use GitHub instead of aws in remote dataset tests | [] | closed | false | null | 0 | 2020-10-01T13:07:50Z | 2020-10-02T07:47:28Z | 2020-10-02T07:47:27Z | null | Recently we switched from aws s3 to github to download dataset scripts.
However in the tests, the dummy data were still downloaded from s3.
So I changed that to download them from github instead, in the MockDownloadManager.
Moreover, I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the entire dataset), so I replaced them with dummy data containing only a few examples.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/694/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/694/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/694.diff",
"html_url": "https://github.com/huggingface/datasets/pull/694",
"merged_at": "2020-10-02T07:47:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/694.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/694"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5470/comments | https://api.github.com/repos/huggingface/datasets/issues/5470/events | https://github.com/huggingface/datasets/pull/5470 | 1,558,542,611 | PR_kwDODunzps5InLw9 | 5,470 | Update dataset card creation | [] | closed | false | null | 4 | 2023-01-26T17:57:51Z | 2023-01-27T16:27:00Z | 2023-01-27T16:20:10Z | null | Encourages users to create a dataset card on the Hub directly with the new metadata ui + import dataset card template instead of telling users to manually create and upload one. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5470/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5470",
"merged_at": "2023-01-27T16:20:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5470"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI failure is unrelated to your PR - feel free to merge :)",
"Haha thanks, you read my mind :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008332 / 0.011353 (-0.003021) | 0.004556 / 0.011008 (-0.006452) | 0.102239 / 0.038508 (0.063731) | 0.029332 / 0.023109 (0.006222) | 0.296189 / 0.275898 (0.020291) | 0.355746 / 0.323480 (0.032266) | 0.007705 / 0.007986 (-0.000281) | 0.003488 / 0.004328 (-0.000840) | 0.079142 / 0.004250 (0.074891) | 0.034980 / 0.037052 (-0.002073) | 0.307460 / 0.258489 (0.048971) | 0.345944 / 0.293841 (0.052103) | 0.033815 / 0.128546 (-0.094731) | 0.011603 / 0.075646 (-0.064044) | 0.322097 / 0.419271 (-0.097175) | 0.043753 / 0.043533 (0.000220) | 0.296706 / 0.255139 (0.041567) | 0.323195 / 0.283200 (0.039996) | 0.092295 / 0.141683 (-0.049388) | 1.542556 / 1.452155 (0.090401) | 1.571896 / 1.492716 (0.079180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191075 / 0.018006 (0.173069) | 0.407394 / 0.000490 (0.406905) | 0.002033 / 0.000200 (0.001833) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023175 / 0.037411 (-0.014236) | 0.094774 / 0.014526 (0.080248) | 0.105782 / 0.176557 (-0.070775) | 0.146608 / 0.737135 (-0.590528) | 0.107519 / 0.296338 (-0.188819) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421516 / 0.215209 (0.206306) | 4.201091 / 2.077655 (2.123436) | 1.880285 / 1.504120 (0.376165) | 1.676333 / 1.541195 (0.135139) | 1.734301 / 1.468490 
(0.265811) | 0.688504 / 4.584777 (-3.896273) | 3.370289 / 3.745712 (-0.375423) | 3.127661 / 5.269862 (-2.142201) | 1.562570 / 4.565676 (-3.003106) | 0.081687 / 0.424275 (-0.342588) | 0.012334 / 0.007607 (0.004727) | 0.524125 / 0.226044 (0.298080) | 5.245595 / 2.268929 (2.976667) | 2.332622 / 55.444624 (-53.112002) | 1.973212 / 6.876477 (-4.903265) | 2.006507 / 2.142072 (-0.135565) | 0.807126 / 4.805227 (-3.998101) | 0.148254 / 6.500664 (-6.352411) | 0.064240 / 0.075469 (-0.011229) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206880 / 1.841788 (-0.634907) | 13.854877 / 8.074308 (5.780569) | 13.806772 / 10.191392 (3.615380) | 0.144380 / 0.680424 (-0.536044) | 0.028492 / 0.534201 (-0.505709) | 0.393854 / 0.579283 (-0.185429) | 0.402210 / 0.434364 (-0.032154) | 0.462138 / 0.540337 (-0.078199) | 0.537480 / 1.386936 (-0.849456) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004529 / 0.011008 (-0.006479) | 0.077925 / 0.038508 (0.039417) | 0.027824 / 0.023109 (0.004715) | 0.342288 / 0.275898 (0.066390) | 0.375071 / 0.323480 (0.051591) | 0.004889 / 0.007986 (-0.003097) | 0.003353 / 0.004328 (-0.000975) | 0.076198 / 0.004250 (0.071947) | 0.037797 / 0.037052 (0.000744) | 0.347834 / 0.258489 (0.089345) | 0.384200 / 0.293841 (0.090359) | 0.032184 / 0.128546 (-0.096362) | 0.011674 / 0.075646 (-0.063972) | 0.086242 / 0.419271 (-0.333029) | 0.044465 / 0.043533 (0.000932) | 0.341712 / 0.255139 (0.086573) | 0.366908 / 0.283200 (0.083709) | 0.091526 / 0.141683 (-0.050156) | 1.495798 / 1.452155 (0.043643) | 1.571700 / 1.492716 (0.078984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221962 / 0.018006 (0.203955) | 0.393095 / 0.000490 (0.392605) | 0.000385 / 0.000200 (0.000185) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.099278 / 0.014526 (0.084753) | 0.105940 / 0.176557 (-0.070617) | 0.141334 / 0.737135 (-0.595802) | 0.110898 / 0.296338 (-0.185440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446150 / 0.215209 (0.230941) | 4.471441 / 2.077655 (2.393786) | 2.124864 / 1.504120 (0.620744) | 1.909950 / 1.541195 (0.368755) | 1.970085 / 1.468490 (0.501595) | 0.706711 / 4.584777 (-3.878066) | 3.380336 / 3.745712 (-0.365376) | 1.866106 / 5.269862 (-3.403756) | 1.160657 / 4.565676 (-3.405019) | 0.082786 / 0.424275 (-0.341489) | 0.012470 / 0.007607 (0.004862) | 0.537620 / 0.226044 (0.311575) | 5.390588 / 2.268929 (3.121659) | 2.539137 / 55.444624 (-52.905488) | 2.191867 / 6.876477 (-4.684610) | 2.236212 / 2.142072 (0.094139) | 0.810756 / 4.805227 (-3.994471) | 0.150933 / 6.500664 (-6.349731) | 0.066141 / 0.075469 (-0.009328) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.271595 / 1.841788 (-0.570193) | 13.840013 / 8.074308 (5.765705) | 13.334443 / 10.191392 (3.143051) | 0.150096 / 0.680424 (-0.530328) | 0.016919 / 0.534201 (-0.517282) | 0.375534 / 0.579283 (-0.203749) | 0.387203 / 0.434364 (-0.047161) | 0.463500 / 0.540337 (-0.076838) | 0.553496 / 1.386936 (-0.833440) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/360/comments | https://api.github.com/repos/huggingface/datasets/issues/360/events | https://github.com/huggingface/datasets/issues/360 | 653,687,176 | MDU6SXNzdWU2NTM2ODcxNzY= | 360 | [Feature request] Add dataset.ragged_map() function for many-to-many transformations | [] | closed | false | null | 2 | 2020-07-09T01:04:43Z | 2020-07-09T19:31:51Z | 2020-07-09T19:31:51Z | null | `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines.
`dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from the dataset.
However, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `["a", "b", "c"] -> ["a[SEP]b", "a[SEP]c", "b[SEP]c", "c[SEP]b", ...]`
I propose a more general `ragged_map()` method that takes in a batch of `N` examples and returns a batch of `M` examples. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this.
My specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. Currently I'm relying on scripts like https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py, which are less general.
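For illustration, a minimal sketch of the many-to-many idea on top of the existing batched `map`, written against the current `datasets` API rather than `nlp`; the pairing logic and the `pair` column name are assumptions made up for the example:
```python
from itertools import permutations

from datasets import Dataset

sentences = Dataset.from_dict({"text": ["a", "b", "c"]})

def make_pairs(batch):
    # One output row per ordered pair of inputs, so a batch of N rows
    # becomes a batch of N * (N - 1) rows.
    return {"pair": [f"{x}[SEP]{y}" for x, y in permutations(batch["text"], 2)]}

# The original "text" column has a different length than the output,
# so it is dropped with remove_columns.
pairs = sentences.map(make_pairs, batched=True, batch_size=3, remove_columns=["text"])
```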
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/360/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/360/timeline | null | completed | null | null | false | [
"Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.",
"You're two steps ahead of me :) In my testing, it also works if `M` < `N`.\r\n\r\nA batched map of different length seems to work if you directly overwrite all of the original keys, but fails if any of the original keys are preserved.\r\n\r\nFor example,\r\n```python\r\n# Create a dummy dataset\r\ndset = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")[\"test\"]\r\ndset = dset.map(lambda ex: {\"length\": len(ex[\"text\"]), \"foo\": 1})\r\n\r\n# Do an allreduce on each batch, overwriting both keys\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])], \"foo\": [1]})\r\n# Dataset(schema: {'length': 'int64', 'foo': 'int64'}, num_rows: 5)\r\n\r\n# Now attempt an allreduce without touching the `foo` key\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])]})\r\n# This fails with the error message below\r\n```\r\n\r\n```bash\r\n File \"/path/to/nlp/src/nlp/arrow_dataset.py\", line 728, in map\r\n arrow_schema = pa.Table.from_pydict(test_output).schema\r\n File \"pyarrow/io.pxi\", line 1532, in pyarrow.lib.Codec.detect\r\n File \"pyarrow/table.pxi\", line 1503, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/public-api.pxi\", line 390, in pyarrow.lib.pyarrow_wrap_table\r\n File \"pyarrow/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 1 named foo expected length 1 but got length 2\r\n```\r\n\r\nAdding the `remove_columns=[\"length\", \"foo\"]` argument to `map()` solves the issue. Leaving the above error for future visitors. Perfect, thank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2088/comments | https://api.github.com/repos/huggingface/datasets/issues/2088/events | https://github.com/huggingface/datasets/pull/2088 | 836,763,733 | MDExOlB1bGxSZXF1ZXN0NTk3MjQ4Mzk1 | 2,088 | change bibtex template to author instead of authors | [] | closed | false | null | 1 | 2021-03-20T09:23:44Z | 2021-03-23T15:40:12Z | 2021-03-23T15:40:12Z | null | Hi,
IMO, when using BibTeX, Author should be used instead of Authors.
See here: http://www.bibtex.org/Using/de/
Thanks
Philip | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2088/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2088.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2088",
"merged_at": "2021-03-23T15:40:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2088.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2088"
} | true | [
"Trailing whitespace was removed. So more changes in diff than just this fix."
] |
https://api.github.com/repos/huggingface/datasets/issues/128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/128/comments | https://api.github.com/repos/huggingface/datasets/issues/128/events | https://github.com/huggingface/datasets/issues/128 | 618,951,117 | MDU6SXNzdWU2MTg5NTExMTc= | 128 | Some error inside nlp.load_dataset() | [] | closed | false | null | 2 | 2020-05-15T13:01:29Z | 2020-05-15T13:10:40Z | 2020-05-15T13:10:40Z | null | First of all, nice work!
I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)
At the simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`
I get an error which, I think, comes from the library's internal code:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-d848d3a99b8c> in <module>()
1 # Downloading and loading a dataset
2
----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]')
8 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
414 try:
415 # Prepare split will record examples associated to the split
--> 416 self._prepare_split(split_generator, **prepare_split_kwargs)
417 except OSError:
418 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
585 fname = "{}-{}.arrow".format(self.name, split_generator.name)
586 fpath = os.path.join(self._cache_dir, fname)
--> 587 examples_type = self.info.features.type
588 writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size)
589
/usr/local/lib/python3.6/dist-packages/nlp/features.py in type(self)
460 @property
461 def type(self):
--> 462 return get_nested_type(self)
463
464 @classmethod
/usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema)
370 # Nested structures: we allow dict, list/tuples, sequences
371 if isinstance(schema, dict):
--> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()})
373 elif isinstance(schema, (list, tuple)):
374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type"
/usr/local/lib/python3.6/dist-packages/nlp/features.py in <dictcomp>(.0)
370 # Nested structures: we allow dict, list/tuples, sequences
371 if isinstance(schema, dict):
--> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()})
373 elif isinstance(schema, (list, tuple)):
374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type"
/usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema)
379 # We allow to reverse list of dict => dict of list for compatiblity with tfds
380 if isinstance(inner_type, pa.StructType):
--> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))
382 return pa.list_(inner_type, schema.length)
383
/usr/local/lib/python3.6/dist-packages/nlp/features.py in <genexpr>(.0)
379 # We allow to reverse list of dict => dict of list for compatiblity with tfds
380 if isinstance(inner_type, pa.StructType):
--> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))
382 return pa.list_(inner_type, schema.length)
383
TypeError: list_() takes exactly one argument (2 given)
``` | {
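As a quick sanity check related to the fix discussed in the comments (the pre-installed Apache Arrow on Colab being too old), one can inspect the installed `pyarrow` version; this snippet is only illustrative:
```python
# If this prints an old version, run the notebook's pip install cell to
# upgrade pyarrow/nlp and restart the runtime before loading datasets.
import pyarrow

print(pyarrow.__version__)
```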
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/128/timeline | null | completed | null | null | false | [
"Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.",
"Thanks for reply, worked fine!\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3973/comments | https://api.github.com/repos/huggingface/datasets/issues/3973/events | https://github.com/huggingface/datasets/issues/3973 | 1,174,455,431 | I_kwDODunzps5GAMSH | 3,973 | ConnectionError and SSLError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2022-03-20T06:45:37Z | 2022-03-30T08:13:32Z | 2022-03-30T08:13:32Z | null | code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_29788/2615425180.py in <module>
----> 1 dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1658
1659 # Create a dataset builder
-> 1660 builder_instance = load_dataset_builder(
1661 path=path,
1662 name=name,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1484 download_config = download_config.copy() if download_config else DownloadConfig()
1485 download_config.use_auth_token = use_auth_token
-> 1486 dataset_module = dataset_module_factory(
1487 path,
1488 revision=revision,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1236 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1237 ) from None
-> 1238 raise e1 from None
1239 else:
1240 raise FileNotFoundError(
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1173 if path.count("/") == 0: # even though the dataset is on the Hub, we get it from GitHub for now
1174 # TODO(QL): use a Hub dataset module factory instead of GitHub
-> 1175 return GithubDatasetModuleFactory(
1176 path,
1177 revision=revision,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in get_module(self)
531 revision = self.revision
532 try:
--> 533 local_path = self.download_loading_script(revision)
534 except FileNotFoundError:
535 if revision is not None or os.getenv("HF_SCRIPTS_VERSION", None) is not None:
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in download_loading_script(self, revision)
511 if download_config.download_desc is None:
512 download_config.download_desc = "Downloading builder script"
--> 513 return cached_path(file_path, download_config=download_config)
514
515 def download_dataset_infos_file(self, revision: Optional[str]) -> str:
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
232 if is_remote_url(url_or_filename):
233 # URL, so get it from the cache (downloading if necessary)
--> 234 output_path = get_from_cache(
235 url_or_filename,
236 cache_dir=cache_dir,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
580 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
581 if head_error is not None:
--> 582 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
583 elif response is not None:
584 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/oscar/oscar.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.0.0/datasets/oscar/oscar.py (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))")))
```
It may be caused by an SSLError (in China?) because it works well on Google Colab.
So how can I download this dataset manually?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3973/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3973/timeline | null | completed | null | null | false | [
"Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`.\r\n\r\nThen you can load the dataset by passing the local path to `oscar.py` to `load_dataset`:\r\n```python\r\nload_dataset(\"path/to/oscar.py\", \"unshuffled_deduplicated_it\")\r\n```",
"it works,but another error occurs.\r\n```\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (SSLError(MaxRetryError(\"HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))\")))\r\n```\r\nI can access `https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt` and `https://aws.amazon.com/cn/s3/` directly, so why it reports a SSLError, should I need tomodify the host file?",
"Could it be an issue with your python environment or your version of OpenSSL ?",
"you are so wise!\r\nit report [ConnectionError] in python 3.9.7\r\nand works well in python 3.8.12\r\n\r\nI need you help again: how can I specify the path for download files?\r\nthe data is too large and my C hardware is not enough",
"Cool ! And you can specify the path for download files with to the `cache_dir` parameter:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('oscar', 'unshuffled_deduplicated_it', cache_dir='path/to/directory')",
"It takes me some days to download data completely, Despise sometimes it occurs again, change py version is feasible way to avoid this ConnectionEror.\r\nparameter `cache_dir` works well, thanks for your kindness again!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2870/comments | https://api.github.com/repos/huggingface/datasets/issues/2870/events | https://github.com/huggingface/datasets/pull/2870 | 988,276,859 | MDExOlB1bGxSZXF1ZXN0NzI3MjI4Njk5 | 2,870 | Fix three typos in two files for documentation | [] | closed | false | null | 0 | 2021-09-04T11:49:43Z | 2021-09-06T08:21:21Z | 2021-09-06T08:19:35Z | null | Changed "bacth_size" to "batch_size" (2x)
Changed "intsructions" to "instructions" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2870/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2870/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2870.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2870",
"merged_at": "2021-09-06T08:19:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2870.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2870"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/989/comments | https://api.github.com/repos/huggingface/datasets/issues/989/events | https://github.com/huggingface/datasets/pull/989 | 755,079,394 | MDExOlB1bGxSZXF1ZXN0NTMwODYwNDMw | 989 | Fix SV -> NO | [] | closed | false | null | 0 | 2020-12-02T08:59:59Z | 2020-12-02T09:18:21Z | 2020-12-02T09:18:14Z | null | This PR fixes the small typo as seen in #956 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/989/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/989.diff",
"html_url": "https://github.com/huggingface/datasets/pull/989",
"merged_at": "2020-12-02T09:18:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/989.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/989"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5004/comments | https://api.github.com/repos/huggingface/datasets/issues/5004/events | https://github.com/huggingface/datasets/pull/5004 | 1,380,860,606 | PR_kwDODunzps4_WQck | 5,004 | Remove license tag file and validation | [] | closed | false | null | 1 | 2022-09-21T12:35:14Z | 2022-09-22T11:47:41Z | 2022-09-22T11:45:46Z | null | As requested, we are removing the validation of the licenses from `datasets` because this is done on the Hub.
Fix #4994.
Related to:
- #4926, which is removing all the validation from `datasets` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5004/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5004",
"merged_at": "2022-09-22T11:45:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5004"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4058/comments | https://api.github.com/repos/huggingface/datasets/issues/4058/events | https://github.com/huggingface/datasets/pull/4058 | 1,185,611,600 | PR_kwDODunzps41RPhl | 4,058 | Updated annotations for nli_tr dataset | [] | closed | false | null | 2 | 2022-03-29T23:46:59Z | 2022-04-12T20:55:12Z | 2022-04-12T10:37:22Z | null | This PR adds annotation tags for `nli_tr` dataset so that the dataset can be searchable wrt. relevant query parameters.
The annotations in this PR are based on the existing annotations of the `snli` and `multi_nli` datasets, as `nli_tr` is a machine-generated extension of those datasets.
This PR is intended only for updating the annotation labels; a follow-up PR will focus on updating the missing sections in the `README.md` as well.
Thanks for taking the time to review it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4058/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4058.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4058",
"merged_at": "2022-04-12T10:37:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4058.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4058"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you so much @[lhoestq](https://github.com/lhoestq) for the time you take to your review the PR!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3018/comments | https://api.github.com/repos/huggingface/datasets/issues/3018/events | https://github.com/huggingface/datasets/issues/3018 | 1,015,311,877 | I_kwDODunzps48hG4F | 3,018 | Support multiple zipped CSV data files | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 3 | 2021-10-04T15:16:59Z | 2021-10-05T14:32:57Z | null | null | As requested by @lewtun, support loading multiple zipped CSV data files.
```python
from datasets import load_dataset
url = "https://domain.org/filename.zip"
data_files = {"train": "train_filename.csv", "test": "test_filename.csv"}
dataset = load_dataset("csv", data_dir=url, data_files=data_files)
```
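For reference, a hedged sketch of the `fsspec` URL chaining alternative discussed in the comments, reusing the placeholder URLs above (assumes an HTTP-capable `fsspec` setup, e.g. with `aiohttp` installed):
```python
import fsspec
import pandas as pd

# "zip://<member>::<url>" chains the zip filesystem on top of HTTP, so a
# member of a remote archive can be read without extracting it manually.
with fsspec.open("zip://train_filename.csv::https://domain.org/filename.zip") as f:
    train_df = pd.read_csv(f)
```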
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3018/timeline | null | null | null | null | false | [
"@lhoestq I would like to draw your attention to the proposed API by @lewtun, using `data_dir` to pass the ZIP URL.\r\n\r\nI'm not totally convinced with this... What do you think?\r\n\r\nMaybe we could discuss other approaches...\r\n\r\nOne brainstorming idea: what about using URL chaining with the hop operator in `data_files`?",
"`data_dir` is currently exclusively used for manually downloaded data.\r\n\r\nMaybe we can have an API that only uses data_files as you are suggesting, using URL chaining ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nurl = \"https://domain.org/filename.zip\"\r\ndata_files = {\"train\": \"zip://train_filename.csv::\" + url, \"test\": \"zip://test_filename.csv::\" + url}\r\ndataset = load_dataset(\"csv\", data_files=data_files)\r\n```\r\n\r\nURL chaining is used by `fsspec` to get access to files in nested filesystems of any kind. Since `fsspec` is being used by `pandas`, `dask` and also extensively by `datasets` I think it would be nice to use it here too",
"URL chaining sounds super nice to me! And it's also a nice way to leverage the same concepts we currently have in the docs around `fsspec` :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5638/comments | https://api.github.com/repos/huggingface/datasets/issues/5638/events | https://github.com/huggingface/datasets/issues/5638 | 1,625,564,471 | I_kwDODunzps5g5CU3 | 5,638 | xPath to implement all operations for Path | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 5 | 2023-03-15T13:47:11Z | 2023-03-17T13:21:12Z | 2023-03-17T13:21:12Z | null | ### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly: they should rely on `fsspec` methods instead of defaulting to `Path` methods, which only work locally.
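A minimal sketch of the kind of `fsspec`-backed behaviour this request is after (assuming `s3fs` is installed; the bucket path is a placeholder):
```python
import fsspec

# Directory creation should go through the remote filesystem (s3fs here)
# rather than falling back to the local os.mkdir.
fs = fsspec.filesystem("s3")
fs.makedirs("my-bucket/some/remote/dir", exist_ok=True)
```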
### Motivation
I'm using xPath to interact with remote objects.
### Your contribution
I could try to make a PR. I'm a bit unfamiliar with chaining right now. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5638/timeline | null | completed | null | null | false | [
" I think https://github.com/fsspec/universal_pathlib is the project you are looking for.\r\n\r\n`xPath` has the methods often used in dataset scripts, and `mkdir` is not one of them (`dl_manager`'s role is to \"interact\" with the file system, so using `mkdir` is discouraged).",
"Right is there a difference between UPath and xPath? Typically is xPath less well implemented compared to Upath, ie missing some implementations of some methods? Or are there methods in xPath that are not implemented with UPath?",
"`xPath` is an internal component (it doesn't have a leading underscore in the name, but it should) not meant to be used outside of `datasets`, and it's only tested on HTTP URLs, not S3.\r\n\r\n",
"Okay I understand that xPath won't support my usecase. What I was perhaps getting to is why not use UPath in `datasets` instead of `xPath` if UPath seems to have strictly more robust implementations.",
"It seems like `universal_pathlib` does not support `fsspec` URL chaining (`::` is the chaining symbol) and \"compression\" filesystems (e.g., `zip`), but this is what we need to access and stream files from within an archive (e.g., we want to stream URLs such as this one: `zip://data.parquet::https://www.dummyurl.com/archive.zip`)"
] |
https://api.github.com/repos/huggingface/datasets/issues/6041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6041/comments | https://api.github.com/repos/huggingface/datasets/issues/6041/events | https://github.com/huggingface/datasets/pull/6041 | 1,807,441,055 | PR_kwDODunzps5Vp0GX | 6,041 | Flatten repository_structure docs on yaml | [] | closed | false | null | 3 | 2023-07-17T10:15:10Z | 2023-07-17T10:24:51Z | 2023-07-17T10:16:22Z | null | To have Splits, Configurations and Builder parameters at the same doc level | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6041/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6041.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6041",
"merged_at": "2023-07-17T10:16:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6041.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6041"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6041). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007587 / 0.011353 (-0.003766) | 0.004469 / 0.011008 (-0.006540) | 0.098028 / 0.038508 (0.059520) | 0.086378 / 0.023109 (0.063269) | 0.412290 / 0.275898 (0.136392) | 0.449912 / 0.323480 (0.126432) | 0.004769 / 0.007986 (-0.003217) | 0.003708 / 0.004328 (-0.000621) | 0.075541 / 0.004250 (0.071290) | 0.063821 / 0.037052 (0.026768) | 0.417213 / 0.258489 (0.158724) | 0.471954 / 0.293841 (0.178113) | 0.036243 / 0.128546 (-0.092303) | 0.009540 / 0.075646 (-0.066106) | 0.339043 / 0.419271 (-0.080228) | 0.061853 / 0.043533 (0.018320) | 0.418510 / 0.255139 (0.163371) | 0.462372 / 0.283200 (0.179173) | 0.027328 / 0.141683 (-0.114355) | 1.745114 / 1.452155 (0.292959) | 1.879839 / 1.492716 (0.387123) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211042 / 0.018006 (0.193035) | 0.512865 / 0.000490 (0.512375) | 0.008744 / 0.000200 (0.008544) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032493 / 0.037411 (-0.004918) | 0.096472 / 0.014526 (0.081946) | 0.110340 / 0.176557 (-0.066216) | 0.183195 / 0.737135 (-0.553940) | 0.112829 / 0.296338 (-0.183510) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478040 / 0.215209 (0.262830) | 4.743776 / 2.077655 (2.666121) | 2.389770 / 1.504120 (0.885650) | 2.168468 / 1.541195 (0.627274) | 2.238154 / 1.468490 
(0.769663) | 0.572308 / 4.584777 (-4.012469) | 4.154783 / 3.745712 (0.409071) | 3.771509 / 5.269862 (-1.498353) | 2.384828 / 4.565676 (-2.180848) | 0.068122 / 0.424275 (-0.356153) | 0.008573 / 0.007607 (0.000965) | 0.560300 / 0.226044 (0.334256) | 5.591163 / 2.268929 (3.322235) | 2.929660 / 55.444624 (-52.514965) | 2.517721 / 6.876477 (-4.358756) | 2.762285 / 2.142072 (0.620213) | 0.687193 / 4.805227 (-4.118034) | 0.157839 / 6.500664 (-6.342825) | 0.071862 / 0.075469 (-0.003607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.484788 / 1.841788 (-0.357000) | 21.696071 / 8.074308 (13.621763) | 15.476166 / 10.191392 (5.284774) | 0.185034 / 0.680424 (-0.495390) | 0.021181 / 0.534201 (-0.513020) | 0.463324 / 0.579283 (-0.115959) | 0.502455 / 0.434364 (0.068091) | 0.559880 / 0.540337 (0.019543) | 0.767281 / 1.386936 (-0.619655) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007572 / 0.011353 (-0.003781) | 0.004331 / 0.011008 (-0.006677) | 0.075023 / 0.038508 (0.036515) | 0.085474 / 0.023109 (0.062365) | 0.464900 / 0.275898 (0.189002) | 0.503348 / 0.323480 (0.179868) | 0.006885 / 0.007986 (-0.001101) | 0.003647 / 0.004328 (-0.000681) | 0.074874 / 0.004250 (0.070623) | 0.071076 / 0.037052 (0.034024) | 0.465495 / 0.258489 (0.207006) | 0.506418 / 0.293841 (0.212577) | 0.038900 / 0.128546 (-0.089647) | 0.009467 / 0.075646 (-0.066180) | 0.082547 / 0.419271 (-0.336724) | 0.058457 / 0.043533 (0.014924) | 0.459114 / 0.255139 (0.203975) | 0.484872 / 0.283200 (0.201673) | 0.027443 / 0.141683 (-0.114240) | 1.713996 / 1.452155 (0.261841) | 1.893639 / 1.492716 (0.400922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248693 / 0.018006 (0.230687) | 0.488805 / 0.000490 (0.488315) | 0.000421 / 0.000200 (0.000221) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034886 / 0.037411 (-0.002525) | 0.103215 / 0.014526 (0.088689) | 0.116422 / 0.176557 (-0.060134) | 0.182789 / 0.737135 (-0.554346) | 0.117788 / 0.296338 (-0.178550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.482782 / 0.215209 (0.267573) | 4.802895 / 2.077655 (2.725241) | 2.489823 / 1.504120 (0.985703) | 2.324005 / 1.541195 (0.782810) | 2.457674 / 1.468490 (0.989184) | 0.566980 / 4.584777 (-4.017797) | 4.117359 / 3.745712 (0.371647) | 3.841180 / 5.269862 (-1.428681) | 2.322410 / 4.565676 (-2.243266) | 0.066367 / 0.424275 (-0.357908) | 0.008501 / 0.007607 (0.000894) | 0.561453 / 0.226044 (0.335408) | 5.694861 / 2.268929 (3.425932) | 3.129829 / 55.444624 (-52.314796) | 2.647375 / 6.876477 (-4.229102) | 2.673071 / 2.142072 (0.530998) | 0.676120 / 4.805227 (-4.129108) | 0.153483 / 6.500664 (-6.347181) | 0.070797 / 0.075469 (-0.004672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.575697 / 1.841788 (-0.266091) | 22.447462 / 8.074308 (14.373154) | 15.964906 / 10.191392 (5.773514) | 0.218343 / 0.680424 (-0.462081) | 0.021051 / 0.534201 (-0.513150) | 0.466079 / 0.579283 (-0.113204) | 0.493190 / 0.434364 (0.058826) | 0.565929 / 0.540337 (0.025592) | 0.768638 / 1.386936 (-0.618298) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006268 / 0.011353 (-0.005085) | 0.003715 / 0.011008 (-0.007293) | 0.080628 / 0.038508 (0.042120) | 0.070294 / 0.023109 (0.047185) | 0.404749 / 0.275898 (0.128851) | 0.434130 / 0.323480 (0.110650) | 0.005533 / 0.007986 (-0.002452) | 0.002980 / 0.004328 (-0.001349) | 0.063016 / 0.004250 (0.058766) | 0.051667 / 0.037052 (0.014615) | 0.403859 / 0.258489 (0.145370) | 0.437913 / 0.293841 (0.144073) | 0.027518 / 0.128546 (-0.101029) | 0.007991 / 0.075646 (-0.067655) | 0.260723 / 0.419271 (-0.158548) | 0.046580 / 0.043533 (0.003047) | 0.405453 / 0.255139 (0.150314) | 0.428390 / 0.283200 (0.145190) | 0.022774 / 0.141683 (-0.118909) | 1.488204 / 1.452155 (0.036049) | 1.536557 / 1.492716 (0.043841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185864 / 0.018006 (0.167858) | 0.431388 / 0.000490 (0.430898) | 0.003743 / 0.000200 (0.003543) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024062 / 0.037411 (-0.013350) | 0.075749 / 0.014526 (0.061224) | 0.083519 / 0.176557 (-0.093037) | 0.147965 / 0.737135 (-0.589170) | 0.085635 / 0.296338 (-0.210703) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400455 / 0.215209 (0.185246) | 4.084294 / 2.077655 (2.006640) | 1.928795 / 1.504120 (0.424675) | 1.743205 / 1.541195 (0.202010) | 1.811233 / 1.468490 
(0.342743) | 0.504976 / 4.584777 (-4.079801) | 3.073134 / 3.745712 (-0.672578) | 2.816357 / 5.269862 (-2.453505) | 1.857462 / 4.565676 (-2.708214) | 0.058329 / 0.424275 (-0.365946) | 0.006850 / 0.007607 (-0.000757) | 0.466017 / 0.226044 (0.239973) | 4.660158 / 2.268929 (2.391230) | 2.396614 / 55.444624 (-53.048010) | 2.007491 / 6.876477 (-4.868986) | 2.206997 / 2.142072 (0.064925) | 0.592233 / 4.805227 (-4.212994) | 0.125364 / 6.500664 (-6.375300) | 0.061166 / 0.075469 (-0.014303) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290148 / 1.841788 (-0.551640) | 18.317462 / 8.074308 (10.243154) | 13.465142 / 10.191392 (3.273750) | 0.149696 / 0.680424 (-0.530728) | 0.017120 / 0.534201 (-0.517081) | 0.334818 / 0.579283 (-0.244465) | 0.363976 / 0.434364 (-0.070388) | 0.388271 / 0.540337 (-0.152066) | 0.542383 / 1.386936 (-0.844553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006029 / 0.011353 (-0.005324) | 0.003656 / 0.011008 (-0.007352) | 0.063518 / 0.038508 (0.025010) | 0.058214 / 0.023109 (0.035105) | 0.435987 / 0.275898 (0.160089) | 0.442769 / 0.323480 (0.119289) | 0.004675 / 0.007986 (-0.003310) | 0.002911 / 0.004328 (-0.001418) | 0.063020 / 0.004250 (0.058769) | 0.049422 / 0.037052 (0.012369) | 0.435521 / 0.258489 (0.177032) | 0.478251 / 0.293841 (0.184411) | 0.027294 / 0.128546 (-0.101252) | 0.008073 / 0.075646 (-0.067574) | 0.068397 / 0.419271 (-0.350875) | 0.044796 / 0.043533 (0.001263) | 0.416646 / 0.255139 (0.161507) | 0.435021 / 0.283200 (0.151821) | 0.024686 / 0.141683 (-0.116997) | 1.495650 / 1.452155 (0.043496) | 1.495846 / 1.492716 (0.003130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211205 / 0.018006 (0.193199) | 0.414497 / 0.000490 (0.414007) | 0.001704 / 0.000200 (0.001504) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025237 / 0.037411 (-0.012174) | 0.077291 / 0.014526 (0.062765) | 0.085736 / 0.176557 (-0.090821) | 0.141059 / 0.737135 (-0.596076) | 0.087620 / 0.296338 (-0.208719) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421995 / 0.215209 (0.206786) | 4.158503 / 2.077655 (2.080849) | 2.313598 / 1.504120 (0.809479) | 2.183553 / 1.541195 (0.642359) | 2.279656 / 1.468490 (0.811166) | 0.500146 / 4.584777 (-4.084631) | 3.092654 / 3.745712 (-0.653059) | 4.371616 / 5.269862 (-0.898245) | 2.605096 / 4.565676 (-1.960581) | 0.057658 / 0.424275 (-0.366617) | 0.006574 / 0.007607 (-0.001033) | 0.491455 / 0.226044 (0.265411) | 4.926730 / 2.268929 (2.657801) | 2.635749 / 55.444624 (-52.808875) | 2.255780 / 6.876477 (-4.620697) | 2.305547 / 2.142072 (0.163474) | 0.589027 / 4.805227 (-4.216200) | 0.126229 / 6.500664 (-6.374435) | 0.063268 / 0.075469 (-0.012201) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.299102 / 1.841788 (-0.542686) | 18.547417 / 8.074308 (10.473109) | 13.860030 / 10.191392 (3.668638) | 0.145482 / 0.680424 (-0.534942) | 0.016543 / 0.534201 (-0.517658) | 0.330788 / 0.579283 (-0.248496) | 0.362020 / 0.434364 (-0.072344) | 0.380635 / 0.540337 (-0.159703) | 0.517375 / 1.386936 (-0.869561) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4747/comments | https://api.github.com/repos/huggingface/datasets/issues/4747/events | https://github.com/huggingface/datasets/pull/4747 | 1,318,586,932 | PR_kwDODunzps48IWKj | 4,747 | Shard parquet in `download_and_prepare` | [] | closed | false | null | 2 | 2022-07-26T18:05:01Z | 2022-09-15T13:43:55Z | 2022-09-15T13:41:26Z | null | Following https://github.com/huggingface/datasets/pull/4724 (needs to be merged first)
It's good practice to shard parquet files to enable parallelism with spark/dask/etc.
I added the `max_shard_size` parameter to `download_and_prepare` (default to 500MB for parquet, and None for arrow).
```python
from datasets import *
output_dir = "./output_dir" # also supports "s3://..."
builder = load_dataset_builder("squad")
builder.download_and_prepare(output_dir, file_format="parquet", max_shard_size="5MB")
```
### Implementation details
The examples are written to a parquet file until `ParquetWriter._num_bytes > max_shard_size`. When this happens, a new writer is instantiated to start writing the next shard. At the end, all the shards are renamed to include the total number of shards in their names: `{builder.name}-{split}-{shard_id:05d}-of-{num_shards:05d}.parquet`
I also added the `MAX_SHARD_SIZE` config variable (default to 500MB)
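A minimal sketch of the shard-rotation logic described above, using `pyarrow` directly rather than the library's internal writer classes; the helper name and the size estimate via `Table.nbytes` are illustrative assumptions, not the actual implementation:
```python
import os
import pyarrow.parquet as pq

def write_sharded_parquet(batches, schema, out_dir, max_shard_size=500 << 20):
    """Write pyarrow Tables into parquet shards of roughly max_shard_size bytes."""
    os.makedirs(out_dir, exist_ok=True)
    shard_paths, writer, written = [], None, 0
    for batch in batches:  # each `batch` is a pyarrow.Table
        if writer is None:
            tmp_path = os.path.join(out_dir, f"tmp-{len(shard_paths):05d}.parquet")
            writer = pq.ParquetWriter(tmp_path, schema)
            shard_paths.append(tmp_path)
            written = 0
        writer.write_table(batch)
        written += batch.nbytes  # rough in-memory size estimate of the shard
        if written >= max_shard_size:  # rotate: close this shard, open a new one later
            writer.close()
            writer = None
    if writer is not None:
        writer.close()
    # rename shards to include the total number of shards, as described above
    num_shards = len(shard_paths)
    for shard_id, tmp_path in enumerate(shard_paths):
        final_path = os.path.join(out_dir, f"data-{shard_id:05d}-of-{num_shards:05d}.parquet")
        os.rename(tmp_path, final_path)
    return num_shards
```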
TODO:
- [x] docstrings
- [x] docs
- [x] tests
cc @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4747/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4747/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4747.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4747",
"merged_at": "2022-09-15T13:41:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4747.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4747"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This is ready for review cc @mariosasko :) please let me know what you think !"
] |
https://api.github.com/repos/huggingface/datasets/issues/5265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5265/comments | https://api.github.com/repos/huggingface/datasets/issues/5265/events | https://github.com/huggingface/datasets/issues/5265 | 1,455,274,864 | I_kwDODunzps5Wvbtw | 5,265 | Get an IterableDataset from a map-style Dataset | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 1 | 2022-11-18T14:54:40Z | 2023-02-01T16:36:03Z | 2023-02-01T16:36:03Z | null | This is useful to leverage iterable-dataset-specific features like:
- fast approximate shuffling
- lazy map, filter etc.
Iterating over the resulting iterable dataset should be at least as fast as iterating over the map-style dataset.
Here are some ideas regarding the API:
```python
# 1.
# - consistency with load_dataset(..., streaming=True)
# - gives intuition that map/filter/etc. are done on-the-fly
ids = ds.stream()
# 2.
# - more explicit on the output type
# - but maybe sounds like a conversion tool rather than a step in a processing pipeline
ids = ds.as_iterable_dataset()
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5265/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5265/timeline | null | completed | null | null | false | [
"I think `stream` could be misleading since the data is not being streamed from remote endpoints (one could think that's the case when they see `load_dataset` followed by `stream`). Hence, I prefer the second option.\r\n\r\nPS: When we resolve https://github.com/huggingface/datasets/issues/4542, we could add `as_tf_dataset` to the API for consistency and deprecate `to_tf_dataset`."
] |
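A sketch of how the feature discussed above can be used once it ships; the method name `to_iterable_dataset` is taken from the resolved issue and may differ across `datasets` versions:
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")        # map-style Dataset
ids = ds.to_iterable_dataset(num_shards=4)      # lazy IterableDataset view
ids = ids.shuffle(seed=42, buffer_size=10_000)  # fast approximate shuffling
for example in ids.take(3):
    print(example["label"])
```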
https://api.github.com/repos/huggingface/datasets/issues/3101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3101/comments | https://api.github.com/repos/huggingface/datasets/issues/3101/events | https://github.com/huggingface/datasets/pull/3101 | 1,028,966,968 | PR_kwDODunzps4tUelE | 3,101 | Update SUPERB to use Audio features | [] | closed | false | null | 1 | 2021-10-18T11:05:18Z | 2021-10-18T12:33:54Z | 2021-10-18T12:06:46Z | null | This is the same dataset refresh as the other Audio ones: https://github.com/huggingface/datasets/pull/3081
cc @patrickvonplaten | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3101/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3101/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3101.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3101",
"merged_at": "2021-10-18T12:06:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3101.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3101"
} | true | [
"Thank you! Sorry I forgot this one @albertvillanova"
] |
https://api.github.com/repos/huggingface/datasets/issues/4054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4054/comments | https://api.github.com/repos/huggingface/datasets/issues/4054/events | https://github.com/huggingface/datasets/pull/4054 | 1,184,575,368 | PR_kwDODunzps41Nwjz | 4,054 | Support float data types in pearsonr/spearmanr metrics | [] | closed | false | null | 1 | 2022-03-29T09:29:10Z | 2022-03-29T14:07:59Z | 2022-03-29T14:02:20Z | null | Fix #4053. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4054/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4054.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4054",
"merged_at": "2022-03-29T14:02:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4054.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4054"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/215/comments | https://api.github.com/repos/huggingface/datasets/issues/215/events | https://github.com/huggingface/datasets/issues/215 | 626,867,879 | MDU6SXNzdWU2MjY4Njc4Nzk= | 215 | NonMatchingSplitsSizesError when loading blog_authorship_corpus | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 10 | 2020-05-28T22:55:19Z | 2023-03-30T15:16:44Z | 2022-02-10T13:05:45Z | null | Getting this error when I run `nlp.load_dataset('blog_authorship_corpus')`.
```
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train',
num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'),
'recorded': SplitInfo(name='train', num_bytes=616473500, num_examples=536323,
dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation',
num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'),
'recorded': SplitInfo(name='validation', num_bytes=30786661, num_examples=27766,
dataset_name='blog_authorship_corpus')}]
```
Upon checking it seems like there is a disparity between the information in `datasets/blog_authorship_corpus/dataset_infos.json` and what was downloaded. Although I can get away with this by passing `ignore_verifications=True` in `load_dataset`, I'm thinking doing so might give problems later on. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/215/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/215/timeline | null | completed | null | null | false | [
"I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation',\r\nnum_bytes=35652716, num_examples=30804, dataset_name='blog_authorship_corpus')}]\r\n```\r\nwhich is different from the `dataset_infos.json` and also different from yours.\r\n\r\nIt looks like the script for generating examples is not consistent",
"The files provided by the authors are corrupted and the script seems to ignore the xml files that can't be decoded (it does `try:... except UnicodeDecodeError`). Maybe depending of the environment some files can be opened and some others don't but not sure why",
"Feel free to do `ignore_verifications=True` for now... The verifications only include a check on the checksums of the downloaded files, and a check on the number of examples in each splits.",
"I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset. ",
"> I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset.\r\n\r\nWhen the checksums don't match, it may mean that the file you downloaded is corrupted. In this case you can try to load the dataset again `load_dataset(\"imdb\", download_mode=\"force_redownload\")`\r\n\r\nAlso I just checked on my side and it worked fine:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imdb\")\r\nprint(len(dataset[\"train\"]))\r\n# 25000\r\n```\r\n\r\nLet me know if redownloading fixes your issue @EmilyAlsentzer .\r\nIf not, feel free to open a separate issue.",
"It doesn't seem to fix the problem. I'll open a separate issue. Thanks. ",
"I wasn't aware of the \"force_redownload\" option and manually removed the '/home/me/.cache/huggingface/datasets/' dir, this worked for me (dataset 'cnn_dailymail')",
"Yes I think this might not be documented well enough. Let’s add it to the doc @lhoestq @SBrandeis.\r\nAnd everything on how to control the cache behavior better (removing, overriding, changing the path, etc)",
"Already fixed:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"blog_authorship_corpus\")\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'date', 'gender', 'age', 'horoscope', 'job'],\r\n num_rows: 689793\r\n })\r\n validation: Dataset({\r\n features: ['text', 'date', 'gender', 'age', 'horoscope', 'job'],\r\n num_rows: 37919\r\n })\r\n})\r\n",
"In my case, I had to remove the cache datasets directory completely as @putssander suggested, the download_mode='forced_redownload' was insufficient.\r\n\r\nI had a private repository with data files that I loaded with a loading script. It was working fine until I pushed a new version of the data files and then the NonMatchingSplitsSizesError was raised.\r\n"
] |
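A short sketch of the two workarounds mentioned in the thread above (parameter names as in the `datasets` API of that era; `ignore_verifications` was later replaced by `verification_mode`):
```python
from datasets import load_dataset

# skip the checksum / split-size verification entirely
ds = load_dataset("blog_authorship_corpus", ignore_verifications=True)

# or force a fresh download in case the cached files are corrupted
ds = load_dataset("blog_authorship_corpus", download_mode="force_redownload")
```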
https://api.github.com/repos/huggingface/datasets/issues/116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/116/comments | https://api.github.com/repos/huggingface/datasets/issues/116/events | https://github.com/huggingface/datasets/issues/116 | 618,628,264 | MDU6SXNzdWU2MTg2MjgyNjQ= | 116 | 🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | 5 | 2020-05-15T01:12:06Z | 2020-05-28T23:43:07Z | 2020-05-28T23:43:07Z | null | I'm trying to use the ROUGE metric.
I have two files, `test.pred.tokenized` and `test.gold.tokenized`, each line containing a sentence.
I tried:
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
for lp, lg in zip(p, g):
rouge.add(lp, lg)
```
But I get the following error:
> pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
---
Full stack-trace :
```
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add
self.writer.write_batch(batch)
File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch
pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict
File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays
File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
```
(`nlp` installed from source) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/116/timeline | null | completed | null | null | false | [
"Can you share your data files or a minimally reproducible example?",
"Sure, [here is a Colab notebook](https://colab.research.google.com/drive/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.\r\n\r\n> ArrowInvalid: Column 1 named references expected length 36 but got length 56",
"This is because `add` takes as input a batch of elements and you provided only one. I think we should have `add` for one prediction/reference and `add_batch` for a batch of predictions/references. This would make it more coherent with the way we use Arrow.\r\n\r\nLet me do this change",
"Thanks for noticing though. I was mainly used to do `.compute` directly ^^",
"Thanks @lhoestq it works :)"
] |
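A sketch of the `add`/`add_batch` split described in the comments above, written against the released `datasets` metric API; the file names are taken from the original snippet and are assumptions about the user's data:
```python
from datasets import load_metric

rouge = load_metric("rouge")

with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
    for lp, lg in zip(p, g):
        # one prediction/reference pair at a time
        rouge.add(prediction=lp.strip(), reference=lg.strip())

# or, for a whole batch at once:
# rouge.add_batch(predictions=list_of_predictions, references=list_of_references)

print(rouge.compute())
```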
https://api.github.com/repos/huggingface/datasets/issues/5787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5787/comments | https://api.github.com/repos/huggingface/datasets/issues/5787/events | https://github.com/huggingface/datasets/pull/5787 | 1,680,965,959 | PR_kwDODunzps5O_KNU | 5,787 | Fix inferring module for unsupported data files | [] | closed | false | null | 4 | 2023-04-24T10:44:50Z | 2023-04-27T13:06:01Z | 2023-04-27T12:57:28Z | null | This PR raises a FileNotFoundError instead:
```
FileNotFoundError: No (supported) data files or dataset script found in <dataset_name>
```
Fix #5785. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5787/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5787",
"merged_at": "2023-04-27T12:57:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5787"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think you can revert the last commit - it should fail if data_files={} IMO",
"The validation of non-empty data_files is addressed in this PR:\r\n- #5802",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008622 / 0.011353 (-0.002730) | 0.005970 / 0.011008 (-0.005038) | 0.117797 / 0.038508 (0.079289) | 0.040955 / 0.023109 (0.017846) | 0.419538 / 0.275898 (0.143640) | 0.455816 / 0.323480 (0.132336) | 0.006481 / 0.007986 (-0.001505) | 0.004507 / 0.004328 (0.000178) | 0.089073 / 0.004250 (0.084822) | 0.052389 / 0.037052 (0.015337) | 0.420053 / 0.258489 (0.161564) | 0.466886 / 0.293841 (0.173045) | 0.042660 / 0.128546 (-0.085886) | 0.014673 / 0.075646 (-0.060973) | 0.411229 / 0.419271 (-0.008042) | 0.076993 / 0.043533 (0.033460) | 0.431693 / 0.255139 (0.176554) | 0.446283 / 0.283200 (0.163084) | 0.131408 / 0.141683 (-0.010275) | 1.820339 / 1.452155 (0.368184) | 1.952946 / 1.492716 (0.460230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246543 / 0.018006 (0.228537) | 0.489806 / 0.000490 (0.489317) | 0.013999 / 0.000200 (0.013800) | 0.000323 / 0.000054 (0.000269) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032541 / 0.037411 (-0.004870) | 0.130569 / 0.014526 (0.116043) | 0.139630 / 0.176557 (-0.036926) | 0.217018 / 0.737135 (-0.520118) | 0.147914 / 0.296338 (-0.148425) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494767 / 0.215209 (0.279558) | 4.949313 / 2.077655 (2.871658) | 2.277023 / 1.504120 (0.772903) | 2.036677 / 1.541195 (0.495482) | 2.064461 / 1.468490 
(0.595970) | 0.842484 / 4.584777 (-3.742293) | 4.720646 / 3.745712 (0.974934) | 4.025673 / 5.269862 (-1.244189) | 2.198606 / 4.565676 (-2.367070) | 0.103042 / 0.424275 (-0.321233) | 0.014794 / 0.007607 (0.007187) | 0.617867 / 0.226044 (0.391822) | 6.197146 / 2.268929 (3.928218) | 2.804927 / 55.444624 (-52.639697) | 2.426420 / 6.876477 (-4.450057) | 2.515182 / 2.142072 (0.373109) | 1.008098 / 4.805227 (-3.797129) | 0.204982 / 6.500664 (-6.295682) | 0.078643 / 0.075469 (0.003174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490790 / 1.841788 (-0.350997) | 17.268042 / 8.074308 (9.193734) | 17.129647 / 10.191392 (6.938255) | 0.170351 / 0.680424 (-0.510073) | 0.021317 / 0.534201 (-0.512884) | 0.517068 / 0.579283 (-0.062215) | 0.500200 / 0.434364 (0.065836) | 0.641974 / 0.540337 (0.101637) | 0.763984 / 1.386936 (-0.622952) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008358 / 0.011353 (-0.002995) | 0.005710 / 0.011008 (-0.005298) | 0.091077 / 0.038508 (0.052569) | 0.040413 / 0.023109 (0.017303) | 0.416634 / 0.275898 (0.140736) | 0.451122 / 0.323480 (0.127642) | 0.006417 / 0.007986 (-0.001569) | 0.004360 / 0.004328 (0.000032) | 0.089543 / 0.004250 (0.085292) | 0.051137 / 0.037052 (0.014085) | 0.420228 / 0.258489 (0.161739) | 0.458649 / 0.293841 (0.164808) | 0.041828 / 0.128546 (-0.086718) | 0.014268 / 0.075646 (-0.061379) | 0.105301 / 0.419271 (-0.313970) | 0.058931 / 0.043533 (0.015398) | 0.413445 / 0.255139 (0.158306) | 0.443882 / 0.283200 (0.160682) | 0.124946 / 0.141683 (-0.016737) | 1.842259 / 1.452155 (0.390104) | 1.948162 / 1.492716 (0.455445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235799 / 0.018006 (0.217792) | 0.487667 / 0.000490 (0.487177) | 0.001112 / 0.000200 (0.000912) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034233 / 0.037411 (-0.003178) | 0.136593 / 0.014526 (0.122068) | 0.145598 / 0.176557 (-0.030959) | 0.206545 / 0.737135 (-0.530590) | 0.150781 / 0.296338 (-0.145558) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522345 / 0.215209 (0.307136) | 5.192092 / 2.077655 (3.114438) | 2.543182 / 1.504120 (1.039062) | 2.285212 / 1.541195 (0.744018) | 2.312803 / 1.468490 (0.844313) | 0.859334 / 4.584777 (-3.725443) | 4.620235 / 3.745712 (0.874523) | 3.964060 / 5.269862 (-1.305802) | 2.046347 / 4.565676 (-2.519330) | 0.105284 / 0.424275 (-0.318991) | 0.015051 / 0.007607 (0.007444) | 0.646530 / 0.226044 (0.420485) | 6.386396 / 2.268929 (4.117467) | 3.131833 / 55.444624 (-52.312791) | 2.761898 / 6.876477 (-4.114579) | 2.833216 / 2.142072 (0.691143) | 1.026024 / 4.805227 (-3.779204) | 0.206776 / 6.500664 (-6.293888) | 0.078845 / 0.075469 (0.003376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.580851 / 1.841788 (-0.260937) | 17.826213 / 8.074308 (9.751905) | 16.929460 / 10.191392 (6.738068) | 0.232483 / 0.680424 (-0.447941) | 0.021123 / 0.534201 (-0.513078) | 0.522196 / 0.579283 (-0.057087) | 0.503495 / 0.434364 (0.069131) | 0.622777 / 0.540337 (0.082440) | 0.753272 / 1.386936 (-0.633664) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1926/comments | https://api.github.com/repos/huggingface/datasets/issues/1926/events | https://github.com/huggingface/datasets/pull/1926 | 813,607,994 | MDExOlB1bGxSZXF1ZXN0NTc3NzI4Mjgy | 1,926 | Fix: Wiki_dpr - add missing scalar quantizer | [] | closed | false | null | 0 | 2021-02-22T15:32:05Z | 2021-02-22T15:49:54Z | 2021-02-22T15:49:53Z | null | All the prebuilt wiki_dpr indexes already use SQ8, I forgot to update the wiki_dpr script after building them. Now it's finally done.
The scalar quantizer SQ8 doesn't reduce the performance of the index as shown in retrieval experiments on RAG.
The quantizer reduces the size of the index a lot but increases index building time. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1926/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1926/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1926.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1926",
"merged_at": "2021-02-22T15:49:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1926.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1926"
} | true | [] |
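For context, a minimal sketch of building a scalar-quantized FAISS index with the public `datasets` API; the configuration name, factory string, and `train_size` are illustrative, not the exact settings used for the prebuilt wiki_dpr indexes:
```python
from datasets import load_dataset

ds = load_dataset("wiki_dpr", "psgs_w100.nq.no_index", split="train")
ds.add_faiss_index(
    column="embeddings",
    string_factory="IVF4096,SQ8",  # SQ8 = 8-bit scalar quantization of the vectors
    train_size=262_144,
)
```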
https://api.github.com/repos/huggingface/datasets/issues/2383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2383/comments | https://api.github.com/repos/huggingface/datasets/issues/2383/events | https://github.com/huggingface/datasets/pull/2383 | 895,779,723 | MDExOlB1bGxSZXF1ZXN0NjQ3OTU4MTQ0 | 2,383 | Improve example in rounding docs | [] | closed | false | null | 0 | 2021-05-19T18:59:23Z | 2021-05-21T12:53:22Z | 2021-05-21T12:36:29Z | null | Improves the example in the rounding subsection of the Split API docs. With this change, it should be clearer what the difference is between the `closest` and the `pct1_dropremainder` rounding. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2383/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2383/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2383.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2383",
"merged_at": "2021-05-21T12:36:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2383.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2383"
} | true | [] |
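A hedged sketch of the two rounding behaviours that the docs example contrasts, using the public `ReadInstruction` API; the dataset name is only an illustration:
```python
from datasets import load_dataset, ReadInstruction

# 'closest' rounds the slice boundary to the closest example
ri_closest = ReadInstruction("train", to=33, unit="%", rounding="closest")

# 'pct1_dropremainder' treats each 1% block as equally sized and drops the remainder,
# so e.g. three 33% slices all contain exactly the same number of examples
ri_pct1 = ReadInstruction("train", to=33, unit="%", rounding="pct1_dropremainder")

ds = load_dataset("imdb", split=ri_closest)
```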
https://api.github.com/repos/huggingface/datasets/issues/5261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5261/comments | https://api.github.com/repos/huggingface/datasets/issues/5261/events | https://github.com/huggingface/datasets/issues/5261 | 1,454,647,861 | I_kwDODunzps5WtCo1 | 5,261 | Add PubTables-1M | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 1 | 2022-11-18T07:56:36Z | 2022-11-18T08:02:18Z | null | null | ### Name
PubTables-1M
### Paper
https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html
### Data
https://github.com/microsoft/table-transformer
### Motivation
Table Transformer is now available in 🤗 Transformers, and it was trained on PubTables-1M. It's a large dataset for table extraction and structure recognition in unstructured documents. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5261/timeline | null | null | null | null | false | [
"cc @albertvillanova the author would like to add this dataset to the hub: https://github.com/microsoft/table-transformer/issues/68#issuecomment-1319114621. Could you help him out?"
] |
https://api.github.com/repos/huggingface/datasets/issues/3176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3176/comments | https://api.github.com/repos/huggingface/datasets/issues/3176/events | https://github.com/huggingface/datasets/pull/3176 | 1,039,068,312 | PR_kwDODunzps4t00xS | 3,176 | OpenSLR dataset: update generate_examples to properly extract data for SLR83 | [] | closed | false | null | 1 | 2021-10-29T00:59:27Z | 2021-11-04T16:20:45Z | 2021-10-29T10:04:09Z | null | Fixed #3168.
The SLR83 indices are CSV files, and there wasn't any code in openslr.py to process these files properly. The end result was an empty table.
I've added code to properly process these CSV files. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3176/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3176.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3176",
"merged_at": "2021-10-29T10:04:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3176.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3176"
} | true | [
"Also fix #3125."
] |
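A purely illustrative sketch of reading a CSV index such as the SLR83 one; the column layout and field names here are hypothetical and may not match the actual openslr.py fix:
```python
import csv

def _generate_examples_from_csv(index_path, audio_dir):
    """Yield (key, example) pairs from an assumed [audio_filename, transcription] CSV."""
    with open(index_path, encoding="utf-8") as f:
        for key, row in enumerate(csv.reader(f)):
            filename, transcription = row[0].strip(), row[-1].strip()
            yield key, {
                "path": f"{audio_dir}/{filename}",
                "sentence": transcription,
            }
```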
https://api.github.com/repos/huggingface/datasets/issues/4706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4706/comments | https://api.github.com/repos/huggingface/datasets/issues/4706/events | https://github.com/huggingface/datasets/pull/4706 | 1,308,198,454 | PR_kwDODunzps47lNBg | 4,706 | Fix empty examples in xtreme dataset for bucc18 config | [] | closed | false | null | 2 | 2022-07-18T16:22:46Z | 2022-07-19T06:41:14Z | 2022-07-19T06:29:17Z | null | As reported in https://huggingface.co/muibk, there are empty examples in xtreme/bucc18.de
I applied your fix @mustaszewski
I also used a dict to make the dataset generation much faster | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4706/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4706/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4706.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4706",
"merged_at": "2022-07-19T06:29:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4706.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4706"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I guess the report link is this instead: https://huggingface.co/datasets/xtreme/discussions/1"
] |
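An illustrative sketch of the dict-based speed-up mentioned above; the file layout (tab-separated id/sentence lines plus gold id pairs) is an assumption, not the actual xtreme script:
```python
def build_pairs(src_lines, tgt_lines, gold_pairs):
    # id -> sentence lookups instead of re-scanning the files for every gold pair
    src = dict(line.rstrip("\n").split("\t", 1) for line in src_lines)
    tgt = dict(line.rstrip("\n").split("\t", 1) for line in tgt_lines)
    for src_id, tgt_id in gold_pairs:
        if src_id in src and tgt_id in tgt:  # skip ids with no matching sentence
            yield {"source_sentence": src[src_id], "target_sentence": tgt[tgt_id]}
```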
https://api.github.com/repos/huggingface/datasets/issues/3843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3843/comments | https://api.github.com/repos/huggingface/datasets/issues/3843/events | https://github.com/huggingface/datasets/pull/3843 | 1,161,397,812 | PR_kwDODunzps40Cm0D | 3,843 | Fix Google Drive URL to avoid Virus scan warning in streaming mode | [] | closed | false | null | 2 | 2022-03-07T13:09:19Z | 2022-03-15T12:30:25Z | 2022-03-15T12:30:23Z | null | The streaming version of https://github.com/huggingface/datasets/pull/3787.
Fix #3835
CC: @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3843/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3843/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3843.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3843",
"merged_at": "2022-03-15T12:30:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3843.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3843"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3843). All of your documentation changes will be reflected on that endpoint.",
"Cool ! Looks like it breaks `test_streaming_gg_drive_gzipped` for some reason..."
] |
https://api.github.com/repos/huggingface/datasets/issues/3154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3154/comments | https://api.github.com/repos/huggingface/datasets/issues/3154/events | https://github.com/huggingface/datasets/issues/3154 | 1,034,361,806 | I_kwDODunzps49pxvO | 3,154 | Sacrebleu unexpected behaviour/requirement for data format | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-24T08:55:33Z | 2021-10-31T09:08:32Z | 2021-10-31T09:08:31Z | null | ## Describe the bug
When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/datasets/pull/3153).
In the below snippet, the original sacrebleu snippet works just fine whereas the datasets implementation throws an error.
## Steps to reproduce the bug
```python
import sacrebleu
import datasets
refs = [
['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],
]
hyps = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']
expected_bleu = 48.530827
ds_bleu = datasets.load_metric("sacrebleu")
bleu_score_sb = sacrebleu.corpus_bleu(hyps, refs).score
print(bleu_score_sb, expected_bleu)
# works: 48.5308...
bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"]
print(bleu_score_ds, expected_bleu)
# ValueError: Predictions and/or references don't match the expected format.
```
This seems to be related to how datasets forces the features format here:
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99
and then manipulates the references during the compute stage here
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L119-L122
I do not quite understand why that is required since sacrebleu handles argument parsing quite well [by itself](https://github.com/mjpost/sacrebleu/blob/2787185dd0f8d224c72ee5a831d163c2ac711a47/sacrebleu/metrics/base.py#L229).
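For reference, transposing the references so that there is one inner list per prediction (a quick sketch of the layout the expected schema describes) does make the `datasets` call run:
```python
# transpose sacrebleu-style refs -> one list of references per prediction
refs_per_pred = [list(r) for r in zip(*refs)]
# refs_per_pred[0] == ['The dog bit the man.', 'The dog had bit the man.']
bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs_per_pred)["score"]
```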
## Actual results
```
Traceback (most recent call last):
File "C:\Users\bramv\AppData\Roaming\JetBrains\PyCharm2020.3\scratches\scratch_23.py", line 23, in <module>
bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"]
File "C:\dev\python\datasets\src\datasets\metric.py", line 392, in compute
self.add_batch(predictions=predictions, references=references)
File "C:\dev\python\datasets\src\datasets\metric.py", line 439, in add_batch
raise ValueError(
ValueError: Predictions and/or references don't match the expected format.
Expected format: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), length=-1, id='references')},
Input predictions: ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.'],
Input references: [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'], ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]
```
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3154/timeline | null | completed | null | null | false | [
"Hi @BramVanroy!\r\n\r\nGood question. This project relies on PyArrow (tables) to store data too big to fit in RAM. In the case of metrics, this means that the number of predictions and references has to match to form a table.\r\n\r\nThat's why your example throws an error even though it matches the schema:\r\n```python\r\nrefs = [\r\n ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],\r\n ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],\r\n] # len(refs) = 2\r\n\r\nhyps = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.'] # len(hyps) = 3\r\n```\r\n\r\nInstead, it should be:\r\n```python\r\nrefs = [\r\n ['The dog bit the man.', 'The dog had bit the man.'],\r\n ['It was not unexpected.', 'No one was surprised.'],\r\n ['The man bit him first.', 'The man had bitten the dog.'], \r\n] # len(refs) = 3\r\n\r\nhyps = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.'] # len(hyps) = 3\r\n```\r\n\r\nHowever, `sacreblue` works with the format that's described in your example, hence this part:\r\nhttps://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99\r\n\r\nHope you get an idea!",
"Thanks, that makes sense. It is a bit unfortunate because it may be confusing to users since the input format is suddenly different than what they may expect from the underlying library/metric. But it is understandable due to how `datasets` works!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1641/comments | https://api.github.com/repos/huggingface/datasets/issues/1641/events | https://github.com/huggingface/datasets/issues/1641 | 775,110,872 | MDU6SXNzdWU3NzUxMTA4NzI= | 1,641 | muchocine dataset cannot be dowloaded | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 5 | 2020-12-27T21:26:28Z | 2021-08-03T05:07:29Z | 2021-08-03T05:07:29Z | null | ```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
267 try:
--> 268 local_path = cached_path(file_path, download_config=download_config)
269 except FileNotFoundError:
7 frames
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/muchocine/muchocine.py
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/muchocine/muchocine.py
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
281 raise FileNotFoundError(
282 "Couldn't find file locally at {}, or remotely at {} or {}".format(
--> 283 combined_path, github_file_path, file_path
284 )
285 )
FileNotFoundError: Couldn't find file locally at muchocine/muchocine.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/muchocine/muchocine.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/muchocine/muchocine.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1641/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1641/timeline | null | completed | null | null | false | [
"I have encountered the same error with `v1.0.1` and `v1.0.2` on both Windows and Linux environments. However, cloning the repo and using the path to the dataset's root directory worked for me. Even after having the dataset cached - passing the path is the only way (for now) to load the dataset.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"squad\") # Works\r\ndataset = load_dataset(\"code_search_net\", \"python\") # Error\r\ndataset = load_dataset(\"covid_qa_deepset\") # Error\r\n\r\npath = \"/huggingface/datasets/datasets/{}/\"\r\ndataset = load_dataset(path.format(\"code_search_net\"), \"python\") # Works\r\ndataset = load_dataset(path.format(\"covid_qa_deepset\")) # Works\r\n```\r\n\r\n",
"Hi @mrm8488 and @amoux!\r\n The datasets you are trying to load have been added to the library during the community sprint for v2 last month. They will be available with the v2 release!\r\nFor now, there are still a couple of solutions to load the datasets:\r\n1. As suggested by @amoux, you can clone the git repo and pass the local path to the script\r\n2. You can also install the latest (master) version of `datasets` using pip: `pip install git+https://github.com/huggingface/datasets.git@master`",
"If you don't want to clone entire `datasets` repo, just download the `muchocine` directory and pass the local path to the directory. Cheers!",
"Muchocine was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `muchocine` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"muchocine\", split=\"train\")\r\n```",
"Thanks @lhoestq "
] |
https://api.github.com/repos/huggingface/datasets/issues/2549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2549/comments | https://api.github.com/repos/huggingface/datasets/issues/2549/events | https://github.com/huggingface/datasets/issues/2549 | 929,819,093 | MDU6SXNzdWU5Mjk4MTkwOTM= | 2,549 | Handling unlabeled datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2021-06-25T04:32:23Z | 2021-06-25T21:07:57Z | 2021-06-25T21:07:56Z | null | Hi!
Is there a way for datasets to produce unlabeled instances (e.g., by making the `ClassLabel` nullable)?
For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"label": data.get("gold_label")`, but got the following error:
```
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset
use_auth_token=use_auth_token,
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 989, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 953, in encode_example
return encode_nested_example(self, example)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in encode_nested_example
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in <dictcomp>
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 875, in encode_nested_example
return schema.encode_example(obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 653, in encode_example
if not -1 <= example_data < self.num_classes:
TypeError: '<=' not supported between instances of 'int' and 'NoneType'
```
What's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2549/timeline | null | completed | null | null | false | [
"Hi @nelson-liu,\r\n\r\nYou can pass the parameter `features` to `load_dataset`: https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset\r\n\r\nIf you look at the code of the MNLI script you referred in your question (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py#L62-L77), you can see how the Features were originally specified. \r\n\r\nFeel free to use it as a template, customize it and pass it to `load_dataset` using the parameter `features`.",
"ah got it, thanks!"
] |
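A minimal sketch of the workaround suggested in the comments above (passing `features` to `load_dataset`). The file name and column set are hypothetical, and the `label` column is declared as a plain integer `Value` so missing gold labels can stay `None` instead of failing `ClassLabel` encoding:

```python
from datasets import Features, Value, load_dataset

# Hypothetical unlabeled MNLI-style JSON Lines file; adjust the columns to your data.
features = Features(
    {
        "premise": Value("string"),
        "hypothesis": Value("string"),
        "label": Value("int64"),  # plain int instead of ClassLabel, so None is allowed
    }
)
dataset = load_dataset("json", data_files="unlabeled_mnli.jsonl", features=features)
```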
https://api.github.com/repos/huggingface/datasets/issues/2425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2425/comments | https://api.github.com/repos/huggingface/datasets/issues/2425/events | https://github.com/huggingface/datasets/pull/2425 | 906,385,457 | MDExOlB1bGxSZXF1ZXN0NjU3NDAwMjM3 | 2,425 | Fix Docstring Mistake: dataset vs. metric | [] | closed | false | null | 4 | 2021-05-29T06:09:53Z | 2021-06-01T08:18:04Z | 2021-06-01T08:18:04Z | null | PR to fix #2412 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2425/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2425/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2425.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2425",
"merged_at": "2021-06-01T08:18:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2425.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2425"
} | true | [
"IMO this PR is ready for review. I do not know why tests fail...",
"The CI fail is unrelated to this PR, and it has been fixed on master, merging :)",
"> I just have one comment: we use rouge, not rogue :p\r\n\r\nOops!",
"rebased on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3979/comments | https://api.github.com/repos/huggingface/datasets/issues/3979/events | https://github.com/huggingface/datasets/pull/3979 | 1,175,258,969 | PR_kwDODunzps40u8NY | 3,979 | Fix google drive streaming for small files | [] | closed | false | null | 4 | 2022-03-21T11:38:46Z | 2022-03-24T16:59:11Z | 2022-03-21T14:25:58Z | null | Google drive did another change recently, following #3787 #3843 .
In particular, Google Drive now returns 403 for GET requests with `confirm=t` when a file doesn't have a virus warning message. I fixed this by passing `confirm=t` if and only if there is one (i.e. when the status code for the HEAD request is 200). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3979/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3979/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3979.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3979",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3979.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3979"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually the CI fails because of this\r\n\r\n\r\nIt looks like we can't have a proper way to test google drive in the CI right now. Though it seems to work locally if you're not banned. I think I'll just disable those tests for now",
"this fix will not be included?",
"No we can't do anything except stop using google drive when possible"
] |
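A hedged sketch of the condition described in the PR body above; this is not the library's actual implementation, and the HEAD-based heuristic is only the rule stated in the description:

```python
import requests

def add_drive_confirm_param(url: str) -> str:
    # Append `confirm=t` to a Google Drive URL only when the described condition
    # holds, i.e. the HEAD request comes back with status 200 (interpreted here as
    # "a virus warning page exists").
    response = requests.head(url, allow_redirects=True)
    if response.status_code == 200:
        separator = "&" if "?" in url else "?"
        return f"{url}{separator}confirm=t"
    return url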
https://api.github.com/repos/huggingface/datasets/issues/5887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5887/comments | https://api.github.com/repos/huggingface/datasets/issues/5887/events | https://github.com/huggingface/datasets/issues/5887 | 1,722,166,382 | I_kwDODunzps5mpixu | 5,887 | HuggingsFace dataset example give error | [] | closed | false | null | 4 | 2023-05-23T14:09:05Z | 2023-07-25T14:01:01Z | 2023-07-25T14:01:00Z | null | ### Describe the bug


### Steps to reproduce the bug
Use the following notebook as the reference document: https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb#scrollTo=biqDH9vpvSVz
```python
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
if i > 5:
break
```
Error
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-44-7040b885f382>](https://localhost:8080/#) in <cell line: 5>()
5 for i, batch in enumerate(dataloader):
6 batch.to(device)
----> 7 outputs = model(**batch)
8 loss = outputs.loss
9 loss.backward()
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: DistilBertForQuestionAnswering.forward() got an unexpected keyword argument 'token_type_ids'
```
https://github.com/huggingface/datasets/assets/1328316/5d8b1d61-9337-4d59-8423-4f37f834c156
### Expected behavior
Run success on Google Colab (free)
### Environment info
Windows 11 x64, Google Colab free (my Google Drive is nearly empty, about 200 MB, but I don't think it causes the problem) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5887/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5887/timeline | null | completed | null | null | false | [
"Nice catch @donhuvy, that's because some models don't need the `token_type_ids`, as in this case, as the example is using `distilbert-base-cased`, and according to the DistilBert documentation at https://huggingface.co/transformers/v3.0.2/model_doc/distilbert.html, `DistilBert doesn’t have token_type_ids, you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP])`. `token_type_ids` are neither required in some other well known models such as RoBERTa. \r\n\r\nHere the issue comes due to a mismatch between the tokenizer and the model, as the Colab is using a BERT tokenizer (`bert-base-cased`), while the model is a DistilBERT (`distilbert-base-cased`), so aligning the tokenizer and the model solves it!",
"#self-assign",
"@donhuvy I've created https://github.com/huggingface/datasets/pull/5902 to solve it! 🤗",
"This has been addressed in #5902.\r\n\r\nThe Quicktour notebook is deprecated now - please use the notebook version of the [Quickstart doc page](https://huggingface.co/docs/datasets/main/en/quickstart) instead (\"Open in Colab\" button)."
] |
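A hedged sketch of the fix described in the first comment: load the tokenizer and the model from the same DistilBERT checkpoint, so the tokenizer never emits `token_type_ids` that DistilBERT's `forward()` rejects.

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Keep tokenizer and model aligned on the same checkpoint; a BERT tokenizer
# paired with a DistilBERT model produces token_type_ids that DistilBERT rejects.
checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

batch = tokenizer("Who wrote it?", "It was written by someone.", return_tensors="pt")
outputs = model(**batch)  # no token_type_ids in `batch`, so forward() accepts it
```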
https://api.github.com/repos/huggingface/datasets/issues/2220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2220/comments | https://api.github.com/repos/huggingface/datasets/issues/2220/events | https://github.com/huggingface/datasets/pull/2220 | 857,774,626 | MDExOlB1bGxSZXF1ZXN0NjE1MTM4NDQz | 2,220 | Fix infinite loop in WindowsFileLock | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] | closed | false | null | 4 | 2021-04-14T10:49:58Z | 2021-04-14T14:59:50Z | 2021-04-14T14:59:34Z | null | Raise exception to avoid infinite loop. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2220/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2220.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2220",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2220.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2220"
} | true | [
"How is it possible to get an infinite loop ? Can you add more details ?",
"Yes, in Windows, if the filename is too long, a `FileNotFoundError` is raised. The exception should be raised in this case. Otherwise, we get into an infinite loop.\r\n\r\nIf other process has the file locked, then `PermissionError` is raised. In this case, `pass` is OK.",
"Note that the filelock module comes from this project that hasn't changed in years - while still being used by ten of thousands of projects:\r\nhttps://github.com/benediktschmitt/py-filelock\r\n\r\nUnless we have proper tests for this, I wouldn't recommend to change it",
"I'm pretty sure many things from the library could break for windows users that haven't disabled the max path length limit.\r\nMaybe it would be simpler to simply raise an error on startup. For exampe, for windows users the error could ask them to disable the limit if it's not been disabled yet ?"
] |
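A hedged sketch of the behaviour discussed in the comments above; it is not the `filelock` package's real code. Per the comments, `PermissionError` (another process holds the lock) should keep the retry loop alive, while `FileNotFoundError` (e.g. an over-long Windows path) should be re-raised so acquisition fails fast instead of looping forever:

```python
import os
import time

def acquire_lock(lock_file: str, poll_interval: float = 0.05) -> int:
    while True:
        try:
            # Opening the lock file stands in for the platform-specific locking step.
            return os.open(lock_file, os.O_RDWR | os.O_CREAT | os.O_TRUNC)
        except PermissionError:
            time.sleep(poll_interval)  # another process holds the lock: keep waiting
        except FileNotFoundError:
            raise  # invalid or too-long path: surface the error instead of retrying
```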
https://api.github.com/repos/huggingface/datasets/issues/3107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3107/comments | https://api.github.com/repos/huggingface/datasets/issues/3107/events | https://github.com/huggingface/datasets/pull/3107 | 1,030,357,527 | PR_kwDODunzps4tYyhF | 3,107 | Add paper BibTeX citation | [] | closed | false | null | 0 | 2021-10-19T14:08:11Z | 2021-10-19T14:26:22Z | 2021-10-19T14:26:21Z | null | Add paper BibTeX citation to README file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3107/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3107/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3107.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3107",
"merged_at": "2021-10-19T14:26:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3107.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3107"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4936/comments | https://api.github.com/repos/huggingface/datasets/issues/4936/events | https://github.com/huggingface/datasets/issues/4936 | 1,363,274,907 | I_kwDODunzps5RQeyb | 4,936 | vivos (Vietnamese speech corpus) dataset not accessible | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 3 | 2022-09-06T13:17:55Z | 2022-09-21T06:06:02Z | 2022-09-12T07:14:20Z | null | ## Describe the bug
VIVOS data is not accessible anymore; neither of these links works (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dataset("vivos")
```
## Expected results
dataset loaded
## Actual results
```
ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))")))
```
We will try to contact the authors, as we wanted to use Vivos as an example in the documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4936/timeline | null | completed | null | null | false | [
"If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)",
"@cahya-wirawan omg this is awesome!! thank you! ",
"We have contacted the authors to ask them."
] |
https://api.github.com/repos/huggingface/datasets/issues/5594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5594/comments | https://api.github.com/repos/huggingface/datasets/issues/5594/events | https://github.com/huggingface/datasets/issues/5594 | 1,603,980,995 | I_kwDODunzps5fms7D | 5,594 | Error while downloading the xtreme udpos dataset | [] | closed | false | null | 3 | 2023-02-28T23:40:53Z | 2023-07-24T14:22:18Z | 2023-07-24T14:22:18Z | null | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```
Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4...
Downloading data: 16%|██████████████▏ | 56.9M/355M [03:11<16:43, 297kB/s]
Generating train split: 0%| | 0/6075 [00:00<?, ? examples/s]Traceback (most recent call last):
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1608, in _prepare_split_single
for key, record in generator:
File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 732, in _generate_examples
yield from UdposParser.generate_examples(config=self.config, filepath=filepath, **kwargs)
File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 921, in generate_examples
for path, file in filepath:
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 158, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 211, in _iter_from_path
yield from cls._iter_tar(f)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 167, in _iter_tar
for tarinfo in stream:
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2475, in __iter__
tarinfo = self.next()
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2344, in next
raise ReadError("unexpected end of data")
tarfile.ReadError: unexpected end of data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 855, in <module>
main()
File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 487, in main
train_dataset = load_dataset(dataset_name, source_language, split="train", cache_dir=args.cache_dir, download_mode="force_redownload")
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 872, in download_and_prepare
self._download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 967, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1488, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode="force_redownload")
```
### Expected behavior
Download the udpos dataset
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5594/timeline | null | completed | null | null | false | [
"Hi! I cannot reproduce this error on my machine.\r\n\r\nThe raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:\r\n```python\r\ntrain_dataset = load_dataset('xtreme', 'udpos.English', split=\"train\", cache_dir=args.cache_dir, download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n```",
"Hi! Apologies for the delayed response! I tried the above and it doesn't solve the issue. Actually, the dataset gets downloaded most times, but sometimes this error occurs (at random afaik). Is it possible that there is a server issue for this particular dataset? I am able to download other datasets using the same code on the same machine with no issues :( I get this error now : \r\n```\r\nDownloading data: 16%|███████████████▌ | 55.9M/355M [04:45<25:25, 196kB/s]\r\nTraceback (most recent call last):\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 1107, in <module>\r\n main()\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 439, in main\r\n en_dataset = load_dataset(\"xtreme\", \"udpos.English\", split=\"train\", download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 872, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 949, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/utils/info_utils.py\", line 62, in verify_checksums\r\n raise NonMatchingChecksumError(\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-3105/ud-treebanks-v2.5.tgz']\r\nSet `verification_mode='no_checks'` to skip checksums verification and ignore this error\r\n```",
"If this happens randomly, then this means the data file from the error message is not always downloaded correctly. \r\n\r\nThe only solution in this scenario is to download the dataset again by passing `download_mode=\"force_redownload\"` to the `load_dataset` call."
] |
https://api.github.com/repos/huggingface/datasets/issues/2554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2554/comments | https://api.github.com/repos/huggingface/datasets/issues/2554/events | https://github.com/huggingface/datasets/issues/2554 | 931,453,855 | MDU6SXNzdWU5MzE0NTM4NTU= | 2,554 | Multilabel metrics not supported | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-06-28T11:09:46Z | 2021-10-13T12:29:13Z | 2021-07-08T08:40:15Z | null | When I try to use a metric like F1 macro I get the following error:
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
There is an explicit casting here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L274
And it looks like this is because here
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/metrics/f1/f1.py#L88
the features can only be integers, so we cannot use that F1 for multilabel. Instead, if I create the following F1 (ints replaced with sequence of ints), it will work:
```python
class F1(datasets.Metric):
def _info(self):
return datasets.MetricInfo(
description=_DESCRIPTION,
citation=_CITATION,
inputs_description=_KWARGS_DESCRIPTION,
features=datasets.Features(
{
"predictions": datasets.Sequence(datasets.Value("int32")),
"references": datasets.Sequence(datasets.Value("int32")),
}
),
reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html"],
)
def _compute(self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None):
return {
"f1": f1_score(
references,
predictions,
labels=labels,
pos_label=pos_label,
average=average,
sample_weight=sample_weight,
),
}
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2554/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2554/timeline | null | completed | null | null | false | [
"Hi @GuillemGSubies, thanks for reporting.\r\n\r\nI have made a PR to fix this issue and allow metrics to be computed also for multilabel classification problems.",
"Looks nice, thank you very much! 🚀 ",
"Sorry for reopening but I just noticed that the `_compute` method for the F1 metric is still not good enough for multilabel problems:\r\n\r\nhttps://github.com/huggingface/datasets/blob/92a3ee549705aa0a107c9fa5caf463b3b3da2616/metrics/f1/f1.py#L115\r\n\r\nSomehow we should be able to change the parameter `average` at least",
"@GuillemGSubies, the parameter `average` passed to `_compute` is then passed to `f1_score`. This is right."
] |
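A hedged, self-contained check of the same idea using scikit-learn directly: multilabel indicator arrays (matching the `Sequence(Value("int32"))` features proposed in the issue body above) work with `f1_score` once `average` is set explicitly.

```python
from sklearn.metrics import f1_score

# Toy multilabel indicator matrices: one row per example, one column per label.
references = [[1, 0, 1], [0, 1, 1]]
predictions = [[1, 0, 0], [0, 1, 1]]

print(f1_score(references, predictions, average="macro"))
print(f1_score(references, predictions, average="micro"))
```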
https://api.github.com/repos/huggingface/datasets/issues/3062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3062/comments | https://api.github.com/repos/huggingface/datasets/issues/3062/events | https://github.com/huggingface/datasets/pull/3062 | 1,023,209,592 | PR_kwDODunzps4tCxfK | 3,062 | Update summary on PyPi beyond NLP | [] | closed | false | null | 0 | 2021-10-11T23:27:46Z | 2021-10-13T08:55:54Z | 2021-10-13T08:55:54Z | null | More than just NLP now | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3062/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3062/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3062.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3062",
"merged_at": "2021-10-13T08:55:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3062.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3062"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3114/comments | https://api.github.com/repos/huggingface/datasets/issues/3114/events | https://github.com/huggingface/datasets/issues/3114 | 1,030,693,130 | I_kwDODunzps49byEK | 3,114 | load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-19T20:01:45Z | 2022-02-14T14:00:28Z | 2022-02-14T14:00:28Z | null | ## Describe the bug
Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by the `load_from_disk` methods in `DatasetDict` (in dataset_dict.py) and `Dataset` (in arrow_dataset.py)) results in an error when calling the download method of the `fs` parameter.
## Steps to reproduce the bug
The documentation for the `fs` parameter states:
```
fs (:class:`~filesystems.S3FileSystem` or ``fsspec.spec.AbstractFileSystem``, optional, default ``None``):
Instance of the remote filesystem used to download the files from.
```
`PyArrowHDFS` from [fsspec](https://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/hdfs.html) implements `fsspec.spec.AbstractFileSystem`. However, when using it as shown below, I get an error.
```python
from fsspec.implementations.hdfs import PyArrowHDFS
...
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
```
## Expected results
Prior to loading from disk, I have managed to successfully store the data and meta-information of a DatasetDict in HDFS by doing:
```python
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
my_datasets.save_to_disk(transformed_corpus_path, fs=fs)
```
As I have 3 datasets in the DatasetDict named `my_datasets`, the previous Python code creates the following contents in HDFS:
```sh
$ hadoop fs -ls "/user/my_user/clickbait/transformed_ds/"
Found 4 items
-rw------- 3 my_user users 43 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/dataset_dict.json
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/test
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/train
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/validation
```
I would expect to recover in `dss` the Arrow-backed datasets I previously saved to HDFS by calling the `save_to_disk` method on the `DatasetDict` object, when invoking `DatasetDict.load_from_disk(...)` as described above.
## Actual results
However, when trying to recover the saved datasets, I get this error:
```
...
File "/home/fperez/dev/neuromancer/neuromancer/corpus.py", line 186, in load_transformed_corpus_from_disk
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/dataset_dict.py", line 748, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1048, in load_from_disk
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
File "pyarrow/_hdfsio.pyx", line 438, in pyarrow._hdfsio.HadoopFileSystem.download
TypeError: download() got an unexpected keyword argument 'recursive'
```
Examining the [signature of the download method in pyarrow 5.0.0](https://github.com/apache/arrow/blob/54d2bd89c99df72fa091b025452f85dd5d88e3cf/python/pyarrow/_hdfsio.pyx#L438), we can see that there's no `recursive` parameter:
```python
def download(self, path, stream, buffer_size=None):
with self.open(path, 'rb') as f:
f.download(stream, buffer_size=buffer_size)
```
## Environment info
- `datasets` version: 1.13.3
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3114/timeline | null | completed | null | null | false | [
"Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.",
"Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0.\r\n\r\nI'll try again with `PyArrowHDFS` once I update arrow to 6.0.0.\r\n\r\nThanks!"
] |
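A hedged sketch of the workaround mentioned in the last comment: use fsspec's Arrow-backed `HadoopFileSystem`, which subclasses `AbstractFileSystem` and should therefore accept `download(..., recursive=True)`. The connection values below are placeholders standing in for the ones used in the issue body.

```python
from fsspec.implementations.arrow import HadoopFileSystem
from datasets import DatasetDict

# Hypothetical connection values; replace them with your cluster's settings.
host, port, user, kerb_ticket = "namenode.example.com", 8020, "my_user", None
fs = HadoopFileSystem(host=host, port=port, user=user, kerb_ticket=kerb_ticket)

transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs=fs)
```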
https://api.github.com/repos/huggingface/datasets/issues/496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/496/comments | https://api.github.com/repos/huggingface/datasets/issues/496/events | https://github.com/huggingface/datasets/pull/496 | 677,016,998 | MDExOlB1bGxSZXF1ZXN0NDY2MjE1Mjg1 | 496 | fix bad type in overflow check | [] | closed | false | null | 0 | 2020-08-11T16:24:58Z | 2020-08-14T13:29:35Z | 2020-08-14T13:29:34Z | null | When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field.
This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example).
This should fix #482 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/496/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/496/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/496.diff",
"html_url": "https://github.com/huggingface/datasets/pull/496",
"merged_at": "2020-08-14T13:29:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/496.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/496"
} | true | [] |
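A small hedged illustration of the type mismatch described in the PR body above, using pyarrow directly: inferring a type from a first example whose field is `None` gives `null`, which then disagrees with the `string` type inferred from later examples.

```python
import pyarrow as pa

first_batch = pa.array([None])         # inferred type: null
later_batch = pa.array(["some text"])  # inferred type: string

print(first_batch.type, later_batch.type)    # null string
print(first_batch.type == later_batch.type)  # False: the kind of mismatch the PR body describes
```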
https://api.github.com/repos/huggingface/datasets/issues/3009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3009/comments | https://api.github.com/repos/huggingface/datasets/issues/3009/events | https://github.com/huggingface/datasets/pull/3009 | 1,014,868,235 | PR_kwDODunzps4sn_YG | 3,009 | Fix Windows paths in SUPERB benchmark datasets | [] | closed | false | null | 0 | 2021-10-04T08:13:49Z | 2021-10-04T13:43:25Z | 2021-10-04T13:43:25Z | null | Minor fix in SUPERB benchmark datasets for Windows pathname component separator.
Related to #2884, #2783 and #2619. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3009/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3009/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3009.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3009",
"merged_at": "2021-10-04T13:43:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3009.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3009"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4469/comments | https://api.github.com/repos/huggingface/datasets/issues/4469/events | https://github.com/huggingface/datasets/pull/4469 | 1,267,213,849 | PR_kwDODunzps45cweQ | 4,469 | Replace data URLs in wider_face dataset once hosted on the Hub | [] | closed | false | null | 1 | 2022-06-10T08:13:25Z | 2022-06-10T16:42:08Z | 2022-06-10T16:32:46Z | null | This PR replaces the URLs of data files in Google Drive with our Hub ones, once the data owners have approved to host their data on the Hub.
They also informed us that their dataset is licensed under CC BY-NC-ND. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4469/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4469/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4469.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4469",
"merged_at": "2022-06-10T16:32:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4469.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4469"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5957/comments | https://api.github.com/repos/huggingface/datasets/issues/5957/events | https://github.com/huggingface/datasets/pull/5957 | 1,757,252,466 | PR_kwDODunzps5TA1EB | 5,957 | Release: 2.13.0 | [] | closed | false | null | 4 | 2023-06-14T16:17:26Z | 2023-06-14T16:33:39Z | 2023-06-14T16:24:39Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5957/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5957/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5957.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5957",
"merged_at": "2023-06-14T16:24:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5957.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5957"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006498 / 0.011353 (-0.004855) | 0.003970 / 0.011008 (-0.007038) | 0.099242 / 0.038508 (0.060734) | 0.044363 / 0.023109 (0.021254) | 0.313900 / 0.275898 (0.038002) | 0.386562 / 0.323480 (0.063082) | 0.003837 / 0.007986 (-0.004149) | 0.004203 / 0.004328 (-0.000125) | 0.076191 / 0.004250 (0.071940) | 0.058823 / 0.037052 (0.021771) | 0.333838 / 0.258489 (0.075349) | 0.368235 / 0.293841 (0.074394) | 0.030774 / 0.128546 (-0.097772) | 0.008787 / 0.075646 (-0.066860) | 0.326474 / 0.419271 (-0.092798) | 0.050903 / 0.043533 (0.007370) | 0.303928 / 0.255139 (0.048789) | 0.321532 / 0.283200 (0.038333) | 0.024162 / 0.141683 (-0.117520) | 1.479662 / 1.452155 (0.027507) | 1.520300 / 1.492716 (0.027584) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212403 / 0.018006 (0.194397) | 0.448019 / 0.000490 (0.447529) | 0.005465 / 0.000200 (0.005265) | 0.000388 / 0.000054 (0.000334) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027533 / 0.037411 (-0.009878) | 0.117477 / 0.014526 (0.102952) | 0.121182 / 0.176557 (-0.055374) | 0.181150 / 0.737135 (-0.555985) | 0.128557 / 0.296338 (-0.167782) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397763 / 0.215209 (0.182554) | 3.959460 / 2.077655 (1.881805) | 1.822057 / 1.504120 (0.317937) | 1.627020 / 1.541195 (0.085826) | 1.695394 / 1.468490 
(0.226904) | 0.536848 / 4.584777 (-4.047929) | 3.765205 / 3.745712 (0.019493) | 3.196300 / 5.269862 (-2.073561) | 1.623583 / 4.565676 (-2.942094) | 0.065823 / 0.424275 (-0.358452) | 0.011062 / 0.007607 (0.003455) | 0.500428 / 0.226044 (0.274384) | 5.008816 / 2.268929 (2.739888) | 2.314660 / 55.444624 (-53.129965) | 2.007429 / 6.876477 (-4.869047) | 2.141438 / 2.142072 (-0.000635) | 0.656697 / 4.805227 (-4.148530) | 0.143555 / 6.500664 (-6.357109) | 0.063928 / 0.075469 (-0.011541) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.169038 / 1.841788 (-0.672750) | 15.027186 / 8.074308 (6.952878) | 13.571484 / 10.191392 (3.380092) | 0.166437 / 0.680424 (-0.513986) | 0.017656 / 0.534201 (-0.516545) | 0.397725 / 0.579283 (-0.181558) | 0.451019 / 0.434364 (0.016655) | 0.469134 / 0.540337 (-0.071203) | 0.575885 / 1.386936 (-0.811051) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006887 / 0.011353 (-0.004465) | 0.004166 / 0.011008 (-0.006842) | 0.077137 / 0.038508 (0.038629) | 0.055631 / 0.023109 (0.032522) | 0.397658 / 0.275898 (0.121760) | 0.473981 / 0.323480 (0.150502) | 0.005365 / 0.007986 (-0.002621) | 0.003401 / 0.004328 (-0.000928) | 0.076481 / 0.004250 (0.072231) | 0.056014 / 0.037052 (0.018961) | 0.415253 / 0.258489 (0.156764) | 0.457620 / 0.293841 (0.163779) | 0.031850 / 0.128546 (-0.096696) | 0.008869 / 0.075646 (-0.066777) | 0.083475 / 0.419271 (-0.335796) | 0.049232 / 0.043533 (0.005699) | 0.392947 / 0.255139 (0.137808) | 0.417243 / 0.283200 (0.134043) | 0.024554 / 0.141683 (-0.117129) | 1.508081 / 1.452155 (0.055926) | 1.541845 / 1.492716 (0.049129) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228470 / 0.018006 (0.210464) | 0.450933 / 0.000490 (0.450443) | 0.001508 / 0.000200 (0.001308) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030189 / 0.037411 (-0.007222) | 0.118853 / 0.014526 (0.104327) | 0.124809 / 0.176557 (-0.051747) | 0.175066 / 0.737135 (-0.562069) | 0.129819 / 0.296338 (-0.166519) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451830 / 0.215209 (0.236621) | 4.505352 / 2.077655 (2.427698) | 2.309303 / 1.504120 (0.805183) | 2.120983 / 1.541195 (0.579789) | 2.198808 / 1.468490 (0.730317) | 0.543836 / 4.584777 (-4.040940) | 3.836650 / 3.745712 (0.090938) | 1.872293 / 5.269862 (-3.397568) | 1.122335 / 4.565676 (-3.443342) | 0.067463 / 0.424275 (-0.356812) | 0.012143 / 0.007607 (0.004536) | 0.553674 / 0.226044 (0.327630) | 5.572101 / 2.268929 (3.303173) | 2.772151 / 55.444624 (-52.672473) | 2.451557 / 6.876477 (-4.424920) | 2.521241 / 2.142072 (0.379169) | 0.665799 / 4.805227 (-4.139428) | 0.143842 / 6.500664 (-6.356822) | 0.065373 / 0.075469 (-0.010096) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.271013 / 1.841788 (-0.570775) | 15.290054 / 8.074308 (7.215746) | 14.807044 / 10.191392 (4.615652) | 0.163767 / 0.680424 (-0.516657) | 0.017383 / 0.534201 (-0.516818) | 0.393046 / 0.579283 (-0.186237) | 0.423056 / 0.434364 (-0.011308) | 0.459193 / 0.540337 (-0.081145) | 0.559964 / 1.386936 (-0.826972) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006112 / 0.011353 (-0.005241) | 0.003712 / 0.011008 (-0.007297) | 0.099996 / 0.038508 (0.061488) | 0.037526 / 0.023109 (0.014417) | 0.305834 / 0.275898 (0.029936) | 0.361368 / 0.323480 (0.037888) | 0.004849 / 0.007986 (-0.003136) | 0.002912 / 0.004328 (-0.001417) | 0.077729 / 0.004250 (0.073479) | 0.053203 / 0.037052 (0.016151) | 0.318088 / 0.258489 (0.059599) | 0.371745 / 0.293841 (0.077904) | 0.029384 / 0.128546 (-0.099162) | 0.008504 / 0.075646 (-0.067142) | 0.318472 / 0.419271 (-0.100799) | 0.046043 / 0.043533 (0.002510) | 0.310418 / 0.255139 (0.055279) | 0.335044 / 0.283200 (0.051844) | 0.020364 / 0.141683 (-0.121319) | 1.503201 / 1.452155 (0.051047) | 1.556408 / 1.492716 (0.063692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210245 / 0.018006 (0.192239) | 0.418918 / 0.000490 (0.418428) | 0.002552 / 0.000200 (0.002352) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022295 / 0.037411 (-0.015116) | 0.099534 / 0.014526 (0.085008) | 0.106432 / 0.176557 (-0.070124) | 0.165110 / 0.737135 (-0.572026) | 0.109851 / 0.296338 (-0.186488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423947 / 0.215209 (0.208738) | 4.232978 / 2.077655 (2.155323) | 2.004849 / 1.504120 (0.500729) | 1.814345 / 1.541195 (0.273151) | 1.809192 / 1.468490 
(0.340702) | 0.561146 / 4.584777 (-4.023631) | 3.385043 / 3.745712 (-0.360669) | 1.708265 / 5.269862 (-3.561597) | 1.030290 / 4.565676 (-3.535387) | 0.067095 / 0.424275 (-0.357180) | 0.011052 / 0.007607 (0.003445) | 0.522416 / 0.226044 (0.296371) | 5.207003 / 2.268929 (2.938075) | 2.367067 / 55.444624 (-53.077558) | 1.998705 / 6.876477 (-4.877772) | 2.068633 / 2.142072 (-0.073439) | 0.672396 / 4.805227 (-4.132831) | 0.135818 / 6.500664 (-6.364846) | 0.065229 / 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187079 / 1.841788 (-0.654709) | 13.893153 / 8.074308 (5.818845) | 13.951328 / 10.191392 (3.759936) | 0.142519 / 0.680424 (-0.537905) | 0.016546 / 0.534201 (-0.517655) | 0.364008 / 0.579283 (-0.215275) | 0.385957 / 0.434364 (-0.048407) | 0.425218 / 0.540337 (-0.115120) | 0.519586 / 1.386936 (-0.867350) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005914 / 0.011353 (-0.005439) | 0.003619 / 0.011008 (-0.007389) | 0.077806 / 0.038508 (0.039298) | 0.037254 / 0.023109 (0.014144) | 0.378976 / 0.275898 (0.103078) | 0.433620 / 0.323480 (0.110140) | 0.003291 / 0.007986 (-0.004694) | 0.004523 / 0.004328 (0.000194) | 0.077604 / 0.004250 (0.073353) | 0.047493 / 0.037052 (0.010441) | 0.396027 / 0.258489 (0.137538) | 0.453345 / 0.293841 (0.159504) | 0.028170 / 0.128546 (-0.100376) | 0.008431 / 0.075646 (-0.067215) | 0.083985 / 0.419271 (-0.335286) | 0.045149 / 0.043533 (0.001617) | 0.369364 / 0.255139 (0.114225) | 0.407191 / 0.283200 (0.123991) | 0.024033 / 0.141683 (-0.117649) | 1.516838 / 1.452155 (0.064683) | 1.564260 / 1.492716 (0.071544) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200848 / 0.018006 (0.182842) | 0.407818 / 0.000490 (0.407328) | 0.003971 / 0.000200 (0.003771) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025033 / 0.037411 (-0.012378) | 0.103585 / 0.014526 (0.089059) | 0.108741 / 0.176557 (-0.067816) | 0.161061 / 0.737135 (-0.576075) | 0.112763 / 0.296338 (-0.183576) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479913 / 0.215209 (0.264704) | 4.801904 / 2.077655 (2.724249) | 2.511433 / 1.504120 (1.007313) | 2.307523 / 1.541195 (0.766328) | 2.338343 / 1.468490 (0.869853) | 0.557731 / 4.584777 (-4.027046) | 3.386261 / 3.745712 (-0.359451) | 2.999978 / 5.269862 (-2.269883) | 1.463058 / 4.565676 (-3.102619) | 0.067645 / 0.424275 (-0.356630) | 0.011224 / 0.007607 (0.003617) | 0.596854 / 0.226044 (0.370810) | 5.940946 / 2.268929 (3.672017) | 2.980194 / 55.444624 (-52.464430) | 2.634961 / 6.876477 (-4.241516) | 2.648160 / 2.142072 (0.506088) | 0.669728 / 4.805227 (-4.135499) | 0.135536 / 6.500664 (-6.365128) | 0.066865 / 0.075469 (-0.008604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.287151 / 1.841788 (-0.554637) | 14.491681 / 8.074308 (6.417373) | 14.185752 / 10.191392 (3.994360) | 0.129391 / 0.680424 (-0.551032) | 0.016650 / 0.534201 (-0.517551) | 0.380111 / 0.579283 (-0.199172) | 0.392877 / 0.434364 (-0.041487) | 0.439402 / 0.540337 (-0.100935) | 0.530865 / 1.386936 (-0.856071) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011446 / 0.011353 (0.000093) | 0.006623 / 0.011008 (-0.004386) | 0.131915 / 0.038508 (0.093407) | 0.047364 / 0.023109 (0.024255) | 0.369203 / 0.275898 (0.093305) | 0.451509 / 0.323480 (0.128029) | 0.006265 / 0.007986 (-0.001720) | 0.004072 / 0.004328 (-0.000257) | 0.098626 / 0.004250 (0.094375) | 0.079523 / 0.037052 (0.042470) | 0.406038 / 0.258489 (0.147549) | 0.450564 / 0.293841 (0.156723) | 0.050793 / 0.128546 (-0.077753) | 0.014667 / 0.075646 (-0.060979) | 0.401359 / 0.419271 (-0.017913) | 0.072299 / 0.043533 (0.028767) | 0.404456 / 0.255139 (0.149317) | 0.396223 / 0.283200 (0.113023) | 0.037048 / 0.141683 (-0.104635) | 1.869123 / 1.452155 (0.416968) | 1.953621 / 1.492716 (0.460905) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237246 / 0.018006 (0.219240) | 0.533207 / 0.000490 (0.532717) | 0.007392 / 0.000200 (0.007192) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029458 / 0.037411 (-0.007954) | 0.112438 / 0.014526 (0.097912) | 0.139115 / 0.176557 (-0.037441) | 0.215225 / 0.737135 (-0.521911) | 0.134440 / 0.296338 (-0.161898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616783 / 0.215209 (0.401574) | 6.113925 / 2.077655 (4.036270) | 2.403465 / 1.504120 (0.899345) | 1.967523 / 1.541195 (0.426329) | 2.042144 / 1.468490 
(0.573654) | 0.927447 / 4.584777 (-3.657330) | 5.280413 / 3.745712 (1.534701) | 2.715335 / 5.269862 (-2.554527) | 1.755640 / 4.565676 (-2.810036) | 0.114370 / 0.424275 (-0.309905) | 0.013583 / 0.007607 (0.005976) | 0.761701 / 0.226044 (0.535657) | 7.466049 / 2.268929 (5.197120) | 3.041943 / 55.444624 (-52.402682) | 2.314477 / 6.876477 (-4.562000) | 2.469285 / 2.142072 (0.327213) | 1.216055 / 4.805227 (-3.589172) | 0.214205 / 6.500664 (-6.286459) | 0.080901 / 0.075469 (0.005432) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565185 / 1.841788 (-0.276603) | 18.387986 / 8.074308 (10.313678) | 19.665109 / 10.191392 (9.473717) | 0.226670 / 0.680424 (-0.453754) | 0.028430 / 0.534201 (-0.505771) | 0.510526 / 0.579283 (-0.068757) | 0.623178 / 0.434364 (0.188814) | 0.592039 / 0.540337 (0.051702) | 0.728462 / 1.386936 (-0.658474) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009161 / 0.011353 (-0.002192) | 0.004891 / 0.011008 (-0.006117) | 0.106502 / 0.038508 (0.067994) | 0.048234 / 0.023109 (0.025125) | 0.451173 / 0.275898 (0.175275) | 0.557948 / 0.323480 (0.234468) | 0.005350 / 0.007986 (-0.002635) | 0.004559 / 0.004328 (0.000230) | 0.110393 / 0.004250 (0.106142) | 0.060624 / 0.037052 (0.023572) | 0.459265 / 0.258489 (0.200776) | 0.575302 / 0.293841 (0.281461) | 0.051379 / 0.128546 (-0.077167) | 0.015576 / 0.075646 (-0.060070) | 0.116650 / 0.419271 (-0.302621) | 0.065534 / 0.043533 (0.022001) | 0.461431 / 0.255139 (0.206292) | 0.487677 / 0.283200 (0.204477) | 0.037773 / 0.141683 (-0.103910) | 1.992416 / 1.452155 (0.540261) | 1.991280 / 1.492716 (0.498564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233607 / 0.018006 (0.215601) | 0.507539 / 0.000490 (0.507049) | 0.001307 / 0.000200 (0.001107) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032897 / 0.037411 (-0.004514) | 0.126549 / 0.014526 (0.112023) | 0.137893 / 0.176557 (-0.038663) | 0.192124 / 0.737135 (-0.545012) | 0.147300 / 0.296338 (-0.149038) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.679371 / 0.215209 (0.464162) | 6.673249 / 2.077655 (4.595595) | 2.979141 / 1.504120 (1.475022) | 2.568789 / 1.541195 (1.027594) | 2.537540 / 1.468490 (1.069050) | 0.973555 / 4.584777 (-3.611222) | 5.313536 / 3.745712 (1.567824) | 2.693283 / 5.269862 (-2.576579) | 1.819483 / 4.565676 (-2.746194) | 0.111644 / 0.424275 (-0.312631) | 0.013218 / 0.007607 (0.005611) | 0.776114 / 0.226044 (0.550070) | 7.758907 / 2.268929 (5.489978) | 3.417611 / 55.444624 (-52.027013) | 2.859502 / 6.876477 (-4.016975) | 2.927726 / 2.142072 (0.785653) | 1.163671 / 4.805227 (-3.641556) | 0.228636 / 6.500664 (-6.272028) | 0.082077 / 0.075469 (0.006607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.746150 / 1.841788 (-0.095637) | 17.961955 / 8.074308 (9.887647) | 21.590545 / 10.191392 (11.399153) | 0.210017 / 0.680424 (-0.470406) | 0.028435 / 0.534201 (-0.505766) | 0.509253 / 0.579283 (-0.070030) | 0.606993 / 0.434364 (0.172629) | 0.587189 / 0.540337 (0.046851) | 0.684023 / 1.386936 (-0.702913) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1869/comments | https://api.github.com/repos/huggingface/datasets/issues/1869/events | https://github.com/huggingface/datasets/pull/1869 | 807,159,835 | MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy | 1,869 | Remove outdated commands in favor of huggingface-cli | [] | closed | false | null | 0 | 2021-02-12T11:28:10Z | 2021-02-12T16:13:09Z | 2021-02-12T16:13:08Z | null | Removing the old user commands since `huggingface_hub` is going to be used instead.
cc @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1869/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1869.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1869",
"merged_at": "2021-02-12T16:13:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1869.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1869"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2166/comments | https://api.github.com/repos/huggingface/datasets/issues/2166/events | https://github.com/huggingface/datasets/issues/2166 | 849,778,545 | MDU6SXNzdWU4NDk3Nzg1NDU= | 2,166 | Regarding Test Sets for the GEM datasets | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | 2 | 2021-04-04T02:02:45Z | 2021-04-06T08:13:12Z | 2021-04-06T08:13:12Z | null | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test'][0]
{'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''}
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2166/timeline | null | completed | null | null | false | [
"Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of the test sets but shouldn't really be used for benchmark submissions)\r\n\r\ncc @sebastiangehrmann",
"Oh okay, thanks @yjernite ! "
] |
https://api.github.com/repos/huggingface/datasets/issues/996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/996/comments | https://api.github.com/repos/huggingface/datasets/issues/996/events | https://github.com/huggingface/datasets/issues/996 | 755,176,084 | MDU6SXNzdWU3NTUxNzYwODQ= | 996 | NotADirectoryError while loading the CNN/Dailymail dataset | [] | closed | false | null | 12 | 2020-12-02T11:07:56Z | 2022-02-17T14:13:39Z | 2022-02-17T14:13:39Z | null |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/996/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/996/timeline | null | completed | null | null | false | [
"Looks like the google drive download failed.\r\nI'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.\r\n\r\nWe should consider finding a better host than google drive for this dataset imo\r\nrelated : #873 #864 ",
"It is working now, thank you. \r\n\r\nShould I leave this issue open to address the Quota-exceeded error?",
"Yes please. It's been happening several times, we definitely need to address it",
"Any updates on this one? I'm facing a similar issue trying to add CelebA.",
"I've looked into it and couldn't find a solution. This looks like a Google Drive limitation..\r\nPlease try to use other hosts when possible",
"The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS.",
"It's possible to host data on our side but we should ask the authors. TFDS has the same issue and doesn't have a solution either afaik.\r\nOtherwise you can use the google drive link, but it it's not that convenient because of this quota issue.",
"Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome.",
"I am getting this error as well. Is there a fix?",
"Not as long as the data is stored on GG drive unfortunately.\r\nMaybe we can ask if there's a mirror ?\r\n\r\nHi @JafferWilson is there a download link to get cnn dailymail from another host than GG drive ?\r\n\r\nTo give you some context, this library provides tools to download and process datasets. For CNN DailyMail the data are downloaded from the link you provide on your github repository. Unfortunately because of GG drive quotas, many users are not able to load this dataset.",
"The following copy of CNN/DM dataset, fixed the problem for me:\r\nhttps://huggingface.co/datasets/ccdv/cnn_dailymail",
"Thanks for the link @mrazizi !\r\n\r\nApparently the original authors don't host the dataset themselves (\"for legal reasons\", source [here](https://github.com/abisee/cnn-dailymail/issues/9))."
] |
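A minimal sketch of the workaround pointed to in the comments above — loading the community mirror instead of the Google Drive-hosted files. It assumes the `ccdv/cnn_dailymail` mirror exposes the same `3.0.0` configuration and split names as the canonical script:

```python
from datasets import load_dataset

# Mirror mentioned in the comments above; avoids the Google Drive quota errors.
train = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="train")
validation = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="validation")
test = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="test")
```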
https://api.github.com/repos/huggingface/datasets/issues/4425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4425/comments | https://api.github.com/repos/huggingface/datasets/issues/4425/events | https://github.com/huggingface/datasets/pull/4425 | 1,253,641,604 | PR_kwDODunzps44uuDq | 4,425 | Make extensions case-insensitive in timit_asr dataset | [] | closed | false | null | 1 | 2022-05-31T10:10:04Z | 2022-06-01T14:15:30Z | 2022-06-01T14:06:51Z | null | Related to #4422. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4425/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4425/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4425.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4425",
"merged_at": "2022-06-01T14:06:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4425.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4425"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2413/comments | https://api.github.com/repos/huggingface/datasets/issues/2413/events | https://github.com/huggingface/datasets/issues/2413 | 903,777,557 | MDU6SXNzdWU5MDM3Nzc1NTc= | 2,413 | AttributeError: 'DatasetInfo' object has no attribute 'task_templates' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-05-27T13:44:28Z | 2021-06-01T01:05:47Z | 2021-06-01T01:05:47Z | null | ## Describe the bug
Hello,
I'm trying to add a dataset and contribute, but the test keeps failing with the CLI command below.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce the bug
It seems like a bug, since I see the error with an existing dataset, not the dataset I'm trying to add.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<any_dataset>`
## Expected results
All tests passed
## Actual results
```
# check that dataset is not empty
self.parent.assertListEqual(sorted(dataset_builder.info.splits.keys()), sorted(dataset))
for split in dataset_builder.info.splits.keys():
# check that loaded datset is not empty
self.parent.assertTrue(len(dataset[split]) > 0)
# check that we can cast features for each task template
> task_templates = dataset_builder.info.task_templates
E AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
tests/test_dataset_common.py:175: AttributeError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Darwin-20.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2413/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2413/timeline | null | completed | null | null | false | [
"Hi ! Can you try using a more up-to-date version ? We added the task_templates in `datasets` 1.7.0.\r\n\r\nIdeally when you're working on new datasets, you should install and use the local version of your fork of `datasets`. Here I think you tried to run the 1.7.0 tests with the 1.6.2 code"
] |
https://api.github.com/repos/huggingface/datasets/issues/1836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1836/comments | https://api.github.com/repos/huggingface/datasets/issues/1836/events | https://github.com/huggingface/datasets/issues/1836 | 803,531,837 | MDU6SXNzdWU4MDM1MzE4Mzc= | 1,836 | test.json has been removed from the limit dataset repo (breaks dataset) | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 1 | 2021-02-08T12:45:53Z | 2021-02-10T16:14:58Z | 2021-02-10T16:14:58Z | null | https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51
The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works:
`https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd8848f0f11527c77dcf168fefd2b23/data` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1836/timeline | null | completed | null | null | false | [
"Thanks for the heads up ! I'm opening a PR to fix that"
] |
https://api.github.com/repos/huggingface/datasets/issues/5219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5219/comments | https://api.github.com/repos/huggingface/datasets/issues/5219/events | https://github.com/huggingface/datasets/issues/5219 | 1,441,255,910 | I_kwDODunzps5V59Hm | 5,219 | Delta Tables usage using Datasets Library | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 4 | 2022-11-09T02:43:56Z | 2023-03-02T19:29:12Z | null | null | ### Feature request
Adding compatibility of Datasets library with Delta Format. Elevating the utilities of Datasets library from Machine Learning Scope to Data Engineering Scope as well.
### Motivation
We know the Datasets library can absorb csv, json, parquet, etc. file formats, but it would be great if it could also work with Delta Tables (the Delta format), which offers features such as time travel, layout optimization, and better query performance that aid Data Engineering.
This would help extend the Datasets library from a Machine Learning utility to a Data Engineering utility as well, and expand its horizons. I use the Datasets library in all my use cases, and as my role expands so does the work; compatibility with the Datasets library is something I don't want to lose.
### Your contribution
I would love to work on this feature, even if it has to be picked up from scratch, including design paradigms and patterns.
I have a basic idea about Delta Live Tables and would quickly brush up on it for this feature. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5219/timeline | null | null | null | null | false | [
"Hi ! Interesting :) Can you provide concrete examples of cases where it can be useful ?",
"Few example blogs and posts that might help on this - \r\n\r\n1. https://hevodata.com/learn/databricks-delta-tables/\r\n2. https://docs.databricks.com/delta/index.html\r\n\r\nBasically, we are looking at utility of Datasets library with Delta Lake Tables.\r\n",
"`datasets` can already read/write from parquet from/to a cloud storage using fsspec, if I understand correctly it's should be possible to load parquet files as delat lake tables no ? :) Or is there someting missing ?",
"@lhoestq Per my understanding, delta lake table is a bunch of paruqet files together with the meta to support ACID. For example file 1 contains v0.1 of record A while file 2 contains v0.2 of record A. I am assuming the Hugging face dataset would delegate the read/write delta table to 3rd party lib, maybe pyarrow. Correct me if I was wrong @reichenbch \r\n\r\nAnd I am assuming, people are asking the versioning of Hugging face datasets. But I am assuming Hugging face delegate this function to github and it is not the key requirement for Public Data set. It actually the key function of ML Ops, I am not sure whether hugging face would like expand to that area."
] |
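A minimal sketch of the point raised in the comments above: on disk a Delta table is a directory of parquet data files plus a `_delta_log/` transaction log, so the current data files can already be read with the plain parquet loader. The path and glob below are illustrative, and skipping the log means losing the ACID/time-travel view (files a Delta reader would treat as removed may still be picked up):

```python
from datasets import load_dataset

# my_delta_table/
# ├── _delta_log/            <- transaction log, ignored by this sketch
# └── part-00000-*.parquet   <- data files read as plain parquet
ds = load_dataset(
    "parquet",
    data_files="my_delta_table/*.parquet",  # illustrative local path
    split="train",
)
```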
https://api.github.com/repos/huggingface/datasets/issues/5212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5212/comments | https://api.github.com/repos/huggingface/datasets/issues/5212/events | https://github.com/huggingface/datasets/pull/5212 | 1,439,642,483 | PR_kwDODunzps5CZPI2 | 5,212 | Fix CI require_beam maximum compatible dill version | [] | closed | false | null | 1 | 2022-11-08T07:30:01Z | 2022-11-15T06:32:27Z | 2022-11-15T06:32:26Z | null | A previous commit to main branch introduced an additional requirement on maximum compatible `dill` version with `apache-beam` in our CI `require_beam`:
- d7c942228b8dcf4de64b00a3053dce59b335f618
- ec222b220b79f10c8d7b015769f0999b15959feb
This PR fixes the maximum compatible `dill` version with `apache-beam`, which is <0.3.2 (and not 0.3.6): https://github.com/apache/beam/blob/v2.42.0/sdks/python/setup.py#L219 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5212/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5212.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5212",
"merged_at": "2022-11-15T06:32:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5212.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5212"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5212). All of your documentation changes will be reflected on that endpoint."
] |
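For reference, a hedged sketch of what such a pin looks like in a `setup.py` test-extras list; the names and surrounding entries are illustrative rather than the actual requirement list in `datasets`:

```python
# Illustrative only: pin dill below 0.3.2 so it stays compatible with apache-beam.
TESTS_REQUIRE = [
    "apache-beam>=2.26.0",
    "dill<0.3.2",
]
```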
https://api.github.com/repos/huggingface/datasets/issues/736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/736/comments | https://api.github.com/repos/huggingface/datasets/issues/736/events | https://github.com/huggingface/datasets/pull/736 | 722,348,191 | MDExOlB1bGxSZXF1ZXN0NTA0MTE0MjMy | 736 | Start community-provided dataset docs | [] | closed | false | null | 5 | 2020-10-15T13:41:39Z | 2020-10-23T13:15:28Z | 2020-10-23T13:15:28Z | null | This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
+ In slack @thomwolf called it a `user-namespace` dataset, but the docs call it `community dataset`.
I think the first naming is clearer, but I didn't address that here.
+ I didn't add metadata, will try that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/736/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/736/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/736.diff",
"html_url": "https://github.com/huggingface/datasets/pull/736",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/736.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/736"
} | true | [
"can you also reference the `--organization` flag like in https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.rst#upload-your-model-with-the-cli ?",
"done!",
"Not sure if the changes in `datasets/wmt_t2t/wmt_utils.py` are intentional.\r\nIf you want to add more configs to wmt, could you do it in a serapate PR ?",
"I don't think I changed wmt_utils (I think github is wrong or my setup is poorly configured).\r\n\r\nLocally git diff master --name-only says one file. Master is up to date.\r\nTried to make a new PR #755 and the same thing happened.",
"Trying new fork."
] |
https://api.github.com/repos/huggingface/datasets/issues/3394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3394/comments | https://api.github.com/repos/huggingface/datasets/issues/3394/events | https://github.com/huggingface/datasets/issues/3394 | 1,073,396,308 | I_kwDODunzps4_-rpU | 3,394 | Preserve all feature types when saving a dataset on the Hub with `push_to_hub` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-12-07T14:08:30Z | 2021-12-21T17:00:09Z | 2021-12-21T17:00:09Z | null | Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parquet files in `push_to_hub`, but also the dataset `info` (stored in a JSON file). | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3394/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3394/timeline | null | completed | null | null | false | [
"According to this [comment in the forum](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/8?u=lhoestq), using `push_to_hub` on a dataset with `ClassLabel` can also make the feature simply disappear when it's reloaded !",
"Maybe we can also fix https://github.com/huggingface/datasets/issues/3035 while working on this because, as pointed out in my initial post, `save_to_disk` also saves the `dataset_info.json` file."
] |
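A minimal sketch reproducing the reported behaviour; the repository id is a placeholder, and the Hub calls are commented out because they need credentials:

```python
from datasets import ClassLabel, Dataset, Features, Value, load_dataset

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds = Dataset.from_dict({"text": ["great", "awful"], "label": [1, 0]}, features=features)
print(ds.features["label"])  # a ClassLabel feature

# ds.push_to_hub("username/classlabel-demo")                        # placeholder repo id
# reloaded = load_dataset("username/classlabel-demo", split="train")
# print(reloaded.features["label"])  # reported bug: plain Value instead of ClassLabel
```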
https://api.github.com/repos/huggingface/datasets/issues/6024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6024/comments | https://api.github.com/repos/huggingface/datasets/issues/6024/events | https://github.com/huggingface/datasets/pull/6024 | 1,801,708,808 | PR_kwDODunzps5VWbGe | 6,024 | Don't reference self in Spark._validate_cache_dir | [] | closed | false | null | 4 | 2023-07-12T20:31:16Z | 2023-07-13T16:58:32Z | 2023-07-13T12:37:09Z | null | Fix for https://github.com/huggingface/datasets/issues/5963 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6024/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6024/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6024.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6024",
"merged_at": "2023-07-13T12:37:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6024.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6024"
} | true | [
"Ptal @lhoestq :) I tested this manually on a multi-node Databricks cluster",
"Hm looks like the check_code_quality failures are unrelated to me change... https://github.com/huggingface/datasets/actions/runs/5536162850/jobs/10103451883?pr=6024",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005952 / 0.011353 (-0.005400) | 0.003585 / 0.011008 (-0.007424) | 0.079163 / 0.038508 (0.040655) | 0.057926 / 0.023109 (0.034817) | 0.326647 / 0.275898 (0.050749) | 0.383485 / 0.323480 (0.060005) | 0.004530 / 0.007986 (-0.003456) | 0.002821 / 0.004328 (-0.001508) | 0.062071 / 0.004250 (0.057820) | 0.048023 / 0.037052 (0.010971) | 0.329368 / 0.258489 (0.070879) | 0.390877 / 0.293841 (0.097036) | 0.026959 / 0.128546 (-0.101588) | 0.007911 / 0.075646 (-0.067735) | 0.259956 / 0.419271 (-0.159315) | 0.044582 / 0.043533 (0.001049) | 0.320537 / 0.255139 (0.065398) | 0.373814 / 0.283200 (0.090614) | 0.020275 / 0.141683 (-0.121408) | 1.532128 / 1.452155 (0.079973) | 1.595031 / 1.492716 (0.102315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186127 / 0.018006 (0.168120) | 0.428586 / 0.000490 (0.428097) | 0.005180 / 0.000200 (0.004980) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024876 / 0.037411 (-0.012536) | 0.072169 / 0.014526 (0.057643) | 0.082015 / 0.176557 (-0.094542) | 0.147467 / 0.737135 (-0.589668) | 0.082769 / 0.296338 (-0.213570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410625 / 0.215209 (0.195416) | 4.116742 / 2.077655 (2.039088) | 2.172291 / 1.504120 (0.668171) | 2.022462 / 1.541195 (0.481268) | 2.048142 / 1.468490 
(0.579651) | 0.503152 / 4.584777 (-4.081625) | 3.019135 / 3.745712 (-0.726577) | 3.589451 / 5.269862 (-1.680410) | 2.206876 / 4.565676 (-2.358801) | 0.057687 / 0.424275 (-0.366588) | 0.006560 / 0.007607 (-0.001047) | 0.475585 / 0.226044 (0.249541) | 4.784344 / 2.268929 (2.515416) | 2.506322 / 55.444624 (-52.938302) | 2.168251 / 6.876477 (-4.708225) | 2.324453 / 2.142072 (0.182381) | 0.590609 / 4.805227 (-4.214618) | 0.124178 / 6.500664 (-6.376486) | 0.059197 / 0.075469 (-0.016272) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212359 / 1.841788 (-0.629429) | 17.915843 / 8.074308 (9.841535) | 13.128330 / 10.191392 (2.936938) | 0.144805 / 0.680424 (-0.535618) | 0.016889 / 0.534201 (-0.517312) | 0.344056 / 0.579283 (-0.235227) | 0.359370 / 0.434364 (-0.074994) | 0.404199 / 0.540337 (-0.136138) | 0.549117 / 1.386936 (-0.837819) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005914 / 0.011353 (-0.005439) | 0.003565 / 0.011008 (-0.007443) | 0.061575 / 0.038508 (0.023067) | 0.057677 / 0.023109 (0.034568) | 0.359753 / 0.275898 (0.083855) | 0.394135 / 0.323480 (0.070655) | 0.004648 / 0.007986 (-0.003338) | 0.002795 / 0.004328 (-0.001534) | 0.061877 / 0.004250 (0.057626) | 0.049673 / 0.037052 (0.012621) | 0.363120 / 0.258489 (0.104631) | 0.402685 / 0.293841 (0.108844) | 0.027021 / 0.128546 (-0.101525) | 0.008006 / 0.075646 (-0.067641) | 0.067398 / 0.419271 (-0.351874) | 0.044442 / 0.043533 (0.000909) | 0.364851 / 0.255139 (0.109712) | 0.387219 / 0.283200 (0.104019) | 0.027267 / 0.141683 (-0.114416) | 1.466675 / 1.452155 (0.014520) | 1.512607 / 1.492716 (0.019891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206156 / 0.018006 (0.188150) | 0.410877 / 0.000490 (0.410387) | 0.003061 / 0.000200 (0.002861) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024869 / 0.037411 (-0.012542) | 0.075736 / 0.014526 (0.061210) | 0.083922 / 0.176557 (-0.092634) | 0.139510 / 0.737135 (-0.597626) | 0.087685 / 0.296338 (-0.208654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414473 / 0.215209 (0.199264) | 4.150633 / 2.077655 (2.072979) | 2.132892 / 1.504120 (0.628773) | 1.964072 / 1.541195 (0.422878) | 2.003353 / 1.468490 (0.534863) | 0.498012 / 4.584777 (-4.086765) | 3.010135 / 3.745712 (-0.735577) | 2.841130 / 5.269862 (-2.428732) | 1.826013 / 4.565676 (-2.739664) | 0.057443 / 0.424275 (-0.366832) | 0.006374 / 0.007607 (-0.001234) | 0.490337 / 0.226044 (0.264292) | 4.889628 / 2.268929 (2.620700) | 2.575626 / 55.444624 (-52.868998) | 2.246522 / 6.876477 (-4.629955) | 2.276183 / 2.142072 (0.134110) | 0.581465 / 4.805227 (-4.223763) | 0.123877 / 6.500664 (-6.376787) | 0.060339 / 0.075469 (-0.015130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333202 / 1.841788 (-0.508585) | 18.363558 / 8.074308 (10.289250) | 14.109356 / 10.191392 (3.917964) | 0.147358 / 0.680424 (-0.533066) | 0.016813 / 0.534201 (-0.517388) | 0.334815 / 0.579283 (-0.244468) | 0.366576 / 0.434364 (-0.067788) | 0.397223 / 0.540337 (-0.143115) | 0.547893 / 1.386936 (-0.839043) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2043/comments | https://api.github.com/repos/huggingface/datasets/issues/2043/events | https://github.com/huggingface/datasets/pull/2043 | 830,279,098 | MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz | 2,043 | Support pickle protocol for dataset splits defined as ReadInstruction | [] | closed | false | null | 2 | 2021-03-12T16:35:11Z | 2021-03-16T14:25:38Z | 2021-03-16T14:05:05Z | null | Fixes #2022 (+ some style fixes) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2043/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2043/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2043.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2043",
"merged_at": "2021-03-16T14:05:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2043.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2043"
} | true | [
"@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading.",
"Yes right ! I read it wrong.\r\nPerfect then"
] |
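For context, a small sketch of what a split defined as a `ReadInstruction` (rather than a plain string) looks like; the dataset name is just an example, and the point of the PR is that such a dataset should now survive pickling:

```python
import pickle

from datasets import ReadInstruction, load_dataset

# Equivalent to the string split "train[:10]", but expressed as a ReadInstruction object.
ds = load_dataset("squad", split=ReadInstruction("train", to=10, unit="abs"))
pickled = pickle.dumps(ds)  # supported once ReadInstruction splits are handled
```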
https://api.github.com/repos/huggingface/datasets/issues/100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/100/comments | https://api.github.com/repos/huggingface/datasets/issues/100/events | https://github.com/huggingface/datasets/pull/100 | 618,081,602 | MDExOlB1bGxSZXF1ZXN0NDE3ODc1MjE2 | 100 | Add per type scores in seqeval metric | [] | closed | false | null | 4 | 2020-05-14T09:37:52Z | 2020-05-14T23:21:35Z | 2020-05-14T23:21:34Z | null | This PR add a bit more detail in the seqeval metric. Now the usage and output are:
```python
import nlp
met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
met.compute(predictions, references)
#Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
```
It is also possible to compute scores for non-IOB notations; POS tagging, for example, doesn't use this kind of notation. Add the `suffix` parameter:
```python
import nlp
met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]
predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]
met.compute(predictions, references, metrics_kwargs={"suffix": True})
#Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/100/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/100.diff",
"html_url": "https://github.com/huggingface/datasets/pull/100",
"merged_at": "2020-05-14T23:21:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/100.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/100"
} | true | [
"LGTM :-) Some small suggestions to shorten the code a bit :-) ",
"Can you put the kwargs as normal kwargs instead of a dict? (And add them to the kwargs description As well)",
"@thom Is-it what you meant?",
"Yes and there is a dynamically generated doc string in the metric script KWARGS DESCRIPTION"
] |
https://api.github.com/repos/huggingface/datasets/issues/1727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1727/comments | https://api.github.com/repos/huggingface/datasets/issues/1727/events | https://github.com/huggingface/datasets/issues/1727 | 784,435,131 | MDU6SXNzdWU3ODQ0MzUxMzE= | 1,727 | BLEURT score calculation raises UnrecognizedFlagError | [] | closed | false | null | 10 | 2021-01-12T17:27:02Z | 2022-06-01T16:06:02Z | 2022-06-01T16:06:02Z | null | Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_metric('bleurt')
gen_text = "I am walking on the promenade today"
ref_text = "I am walking along the promenade on this sunny day"
bleurt.compute(predictions=[gen_text], references=[ref_text])
```
Error Output:
```
Using default BLEURT-Base checkpoint for sequence maximum length 128. You can use a bigger model for better results with e.g.: datasets.load_metric('bleurt', 'bleurt-large-512').
INFO:tensorflow:Reading checkpoint /home/ubuntu/.cache/huggingface/metrics/bleurt/default/downloads/extracted/9aee35580225730ac5422599f35c4986e4c49cafd08082123342b1019720dac4/bleurt-base-128.
INFO:tensorflow:Config file found, reading.
INFO:tensorflow:Will load checkpoint bert_custom
INFO:tensorflow:Performs basic checks...
INFO:tensorflow:... name:bert_custom
INFO:tensorflow:... vocab_file:vocab.txt
INFO:tensorflow:... bert_config_file:bert_config.json
INFO:tensorflow:... do_lower_case:True
INFO:tensorflow:... max_seq_length:128
INFO:tensorflow:Creating BLEURT scorer.
INFO:tensorflow:Loading model...
INFO:tensorflow:BLEURT initialized.
---------------------------------------------------------------------------
UnrecognizedFlagError Traceback (most recent call last)
<ipython-input-12-8b3f4322318a> in <module>
2 gen_text = "I am walking on the promenade today"
3 ref_text = "I am walking along the promenade on this sunny day"
----> 4 bleurt.compute(predictions=[gen_text], references=[ref_text])
~/anaconda3/envs/noved/lib/python3.8/site-packages/datasets/metric.py in compute(self, *args, **kwargs)
396 references = self.data["references"]
397 with temp_seed(self.seed):
--> 398 output = self._compute(predictions=predictions, references=references, **kwargs)
399
400 if self.buf_writer is not None:
~/.cache/huggingface/modules/datasets_modules/metrics/bleurt/b1de33e1cbbcb1dbe276c887efa1fad68c6aff913885108078fa1ad408908778/bleurt.py in _compute(self, predictions, references)
103
104 def _compute(self, predictions, references):
--> 105 scores = self.scorer.score(references=references, candidates=predictions)
106 return {"scores": scores}
~/anaconda3/envs/noved/lib/python3.8/site-packages/bleurt/score.py in score(self, references, candidates, batch_size)
164 """
165 if not batch_size:
--> 166 batch_size = FLAGS.bleurt_batch_size
167
168 candidates, references = list(candidates), list(references)
~/anaconda3/envs/noved/lib/python3.8/site-packages/tensorflow/python/platform/flags.py in __getattr__(self, name)
83 # a flag.
84 if not wrapped.is_parsed():
---> 85 wrapped(_sys.argv)
86 return wrapped.__getattr__(name)
87
~/anaconda3/envs/noved/lib/python3.8/site-packages/absl/flags/_flagvalues.py in __call__(self, argv, known_only)
643 for name, value in unknown_flags:
644 suggestions = _helpers.get_flag_suggestions(name, list(self))
--> 645 raise _exceptions.UnrecognizedFlagError(
646 name, value, suggestions=suggestions)
647
UnrecognizedFlagError: Unknown command line flag 'f'
```
Possible Fix:
Modify `_compute` method https://github.com/huggingface/datasets/blob/7e64851a12263dc74d41c668167918484c8000ab/metrics/bleurt/bleurt.py#L104
to receive a `batch_size` argument, for example:
```
def _compute(self, predictions, references, batch_size=1):
scores = self.scorer.score(references=references, candidates=predictions, batch_size=batch_size)
return {"scores": scores}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1727/timeline | null | completed | null | null | false | [
"Upgrading tensorflow to version 2.4.0 solved the issue.",
"I still have the same error even with TF 2.4.0.",
"And I have the same error with TF 2.4.1. I believe this issue should be reopened. Any ideas?!",
"I'm seeing the same issue with TF 2.4.1 when running the following in https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb:\r\n```\r\n!pip install git+https://github.com/google-research/bleurt.git\r\nreferences = [\"foo bar baz\", \"one two three\"]\r\nbleurt_metric = load_metric('bleurt')\r\npredictions = [\"foo bar\", \"four five six\"]\r\nbleurt_metric.compute(predictions=predictions, references=references)\r\n```",
"@aleSuglia @oscartackstrom - Are you getting the error when running your code in a Jupyter notebook ?\r\n\r\nI tried reproducing this error again, and was unable to do so from the python command line console in a virtual environment similar to the one I originally used (and unfortunately no longer have access to) when I first got the error. \r\nHowever, I've managed to reproduce the error by running the same code in a Jupyter notebook running a kernel from the same virtual environment.\r\nThis made me suspect that the problem is somehow related to the Jupyter notebook.\r\n\r\nMore environment details:\r\n```\r\nOS: Ubuntu Linux 18.04\r\nconda==4.8.3\r\npython==3.8.5\r\ndatasets==1.3.0\r\ntensorflow==2.4.0\r\nBLEURT==0.0.1\r\nnotebook==6.2.0\r\n```",
"This happens when running the notebook on colab. The issue seems to be that colab populates sys.argv with arguments not handled by bleurt.\r\n\r\nRunning this before calling bleurt fixes it:\r\n```\r\nimport sys\r\nsys.argv = sys.argv[:1]\r\n```\r\n\r\nNot the most elegant solution. Perhaps it needs to be fixed in the bleurt code itself rather than huggingface?\r\n\r\nThis is the output of `print(sys.argv)` when running on colab:\r\n```\r\n['/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py', '-f', '/root/.local/share/jupyter/runtime/kernel-a857a78c-44d6-4b9d-b18a-030b858ee327.json']\r\n```",
"I got the error when running it from the command line. It looks more like an error that should be fixed in the BLEURT codebase.",
"Seems to be a known issue in the bleurt codebase: https://github.com/google-research/bleurt/issues/24.",
"Hi, the problem should be solved now.",
"Hi @tsellam! I can verify that the issue is indeed fixed now. Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3720/comments | https://api.github.com/repos/huggingface/datasets/issues/3720/events | https://github.com/huggingface/datasets/issues/3720 | 1,137,537,080 | I_kwDODunzps5DzXA4 | 3,720 | Builder Configuration Update Required on Common Voice Dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 7 | 2022-02-14T16:21:41Z | 2022-02-15T14:31:27Z | null | null | Missing language in Common Voice dataset
**Link:** https://huggingface.co/datasets/common_voice
I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't because the builder configuration was not found. I checked the source file here for language support:
https://github.com/huggingface/datasets/blob/master/datasets/common_voice/common_voice.py
and Urdu isn't included there. I assume a quick update will fix the issue as Urdu speech is now available at the Common Voice dataset.
Am I the one who added this dataset? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3720/timeline | null | null | null | null | false | [
"Hi @aasem, thanks for reporting.\r\n\r\nPlease note that currently Commom Voice is hosted on our Hub as a community dataset by the Mozilla Foundation. See all Common Voice versions here: https://huggingface.co/mozilla-foundation\r\n\r\nMaybe we should add an explaining note in our \"legacy\" Common Voice canonical script? What do you think @lhoestq @mariosasko ?",
"Thank you, @albertvillanova, for the quick response. I am not sure about the exact flow but I guess adding the following lines under the `_Languages` dictionary definition in [common_voice.py](https://github.com/huggingface/datasets/blob/master/datasets/common_voice/common_voice.py) might resolve the issue. I guess the dataset is recently made available so the file needs updating.\r\n\r\n```\r\n\"ur\": {\r\n \"Language\": \"Urdu\",\r\n \"Date\": \"2022-01-19\",\r\n \"Size\": \"68 MB\",\r\n \"Version\": \"ur_3h_2022-01-19\",\r\n \"Validated_Hr_Total\": 1,\r\n \"Overall_Hr_Total\": 3,\r\n \"Number_Of_Voice\": 48,\r\n },\r\n```\r\n",
"@aasem for compliance reasons, we are no longer updating the `common_voice.py` script.\r\n\r\nWe agreed with Mozilla Foundation to use their community datasets instead, which will ask you to accept their terms of use:\r\n```\r\nYou need to share your contact information to access this dataset.\r\n\r\nThis repository is publicly accessible, but you have to register to access its content — don't worry, it's just one click!\r\n\r\nBy clicking on “Access repository” below, you accept that your contact information (email address and username) can be shared with the repository authors. This will let the authors get in touch for instance if some parts of the repository's contents need to be taken down for licensing reasons.\r\n\r\nBy clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset.\r\n\r\nYou will immediately be granted access to the contents of the dataset. \r\n```\r\n\r\nIn order to use e.g. their Common Voice dataset version 8.0, please:\r\n- First visit their dataset page: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0\r\n- Accept their term of use by clicking \"Access repository\"\r\n- You can then load their dataset with:\r\n ```python\r\n load_dataset(\"mozilla-foundation/common_voice_8_0\", \"ur\", split=\"train+validation\")\r\n ```",
"@albertvillanova \r\n>Maybe we should add an explaining note in our \"legacy\" Common Voice canonical script?\r\n\r\nYes, I agree we should have a deprecation notice in the canonical script to redirect users to the new script.",
"@albertvillanova, \r\nI now get the following error after downloading my access token from the huggingface and passing it to `load_dataset` call:\r\n\r\n`AttributeError: 'DownloadManager' object has no attribute 'download_config'`\r\n\r\nAny quick pointer on how it might be resolved?",
"@aasem What version of `datasets` are you using? We renamed that attribute from `_download_config` to `download_conig` fairly recently, so updating to the newest version should resolve the issue:\r\n```\r\npip install -U datasets\r\n```",
"Thanks a lot, @mariosasko. That completely resolved the issue. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2436/comments | https://api.github.com/repos/huggingface/datasets/issues/2436/events | https://github.com/huggingface/datasets/pull/2436 | 908,100,211 | MDExOlB1bGxSZXF1ZXN0NjU4ODQzMzQy | 2,436 | Update DatasetMetadata and ReadMe | [] | closed | false | null | 0 | 2021-06-01T09:32:37Z | 2021-06-14T13:23:27Z | 2021-06-14T13:23:26Z | null | This PR contains the changes discussed in #2395.
**Edit**:
In addition to those changes, I'll be updating the `ReadMe` as follows:
Currently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors.
One way to make `ReadMe` consistent with `DatasetMetadata` and add a separate `.validate()` method is to throw separate parsing and validation errors.
This way, we don't have to throw validation errors, but only parsing errors in `__init__()`. We can have an option in `__init__()` to suppress parsing errors so that an object is created for validation. Doing this will allow the user to get all the errors in one go.
In `test_dataset_cards`, we are already catching error messages and appending to a list. This can be done for `ReadMe()` for parsing errors, and `ReadMe(..., suppress_errors=True); readme.validate()` for validation, separately.
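For illustration, a minimal sketch of the flow described above (the names follow the snippet in this description and are assumptions, not a confirmed API):
```python
# Hypothetical sketch: with suppress_errors=True, parsing errors are collected
# instead of raised, so the object is still created; validate() then surfaces
# all parsing and validation errors in one go.
readme = ReadMe(readme_path, suppress_errors=True)
readme.validate()
```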
**Edit 2**:
The only parsing issue we have as of now is multiple headings at the same level with the same name. I assume this will happen very rarely, but it is still better to throw an error than silently pick one of them. It should be okay to separate it this way.
Wdyt @lhoestq ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2436/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2436/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2436.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2436",
"merged_at": "2021-06-14T13:23:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2436.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2436"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1422/comments | https://api.github.com/repos/huggingface/datasets/issues/1422/events | https://github.com/huggingface/datasets/issues/1422 | 760,707,113 | MDU6SXNzdWU3NjA3MDcxMTM= | 1,422 | Can't map dataset (loaded from csv) | [] | closed | false | null | 2 | 2020-12-09T22:05:42Z | 2020-12-17T18:13:40Z | 2020-12-17T18:13:40Z | null | Hello! I am trying to load a single csv file with two columns: ('label': str, 'text': str), where label is a str with one of two possible classes.
The steps below are similar to [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where a BERT model and tokenizer are used to classify the IMDB dataset. The only difference is that here the dataset is loaded from a .csv file.
Here is how I load it:
```python
import pandas as pd
from datasets import Dataset, Features, ClassLabel, Value

data_path = 'data.csv'
data = pd.read_csv(data_path)
# process class name to indices
classes = ['neg', 'pos']
class_to_idx = { cl: i for i, cl in enumerate(classes) }
# now data is like {'label': int, 'text' str}
data['label'] = data['label'].apply(lambda x: class_to_idx[x])
# load dataset and map it with defined `tokenize` function
target, feature = 'label', 'text'  # column names used below
features = Features({
target: ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None),
feature: Value(dtype='string', id=None),
})
dataset = Dataset.from_pandas(data, features=features)
dataset.map(tokenize, batched=True, batch_size=len(dataset))
```
It fails on the last line with the following error:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-112-32b6275ce418> in <module>()
9 })
10 dataset = Dataset.from_pandas(data, features=features)
---> 11 dataset.map(tokenizer, batched=True, batch_size=len(dataset))
2 frames
/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1237 test_inputs = self[:2] if batched else self[0]
1238 test_indices = [0, 1] if batched else 0
-> 1239 update_data = does_function_return_dict(test_inputs, test_indices)
1240 logger.info("Testing finished, running the mapping function on the dataset")
1241
/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1208 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1209 processed_inputs = (
-> 1210 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1211 )
1212 does_return_dict = isinstance(processed_inputs, Mapping)
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2281 )
2282 ), (
-> 2283 "text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "
2284 "or `List[List[str]]` (batch of pretokenized examples)."
2285 )
AssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```
which I think is not expected. I also tried the same steps using `Dataset.from_csv` which resulted in the same error.
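For reference, a minimal sketch of the kind of batched `tokenize` function that `map` expects here (assuming `tokenizer` is an already instantiated Hugging Face tokenizer; the column name matches the csv above):
```python
# Hypothetical sketch: with batched=True, map passes a dict of lists,
# so the function reads the "text" column and returns a dict of tokenized outputs.
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)
```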
For reproducing this, I used [this dataset from kaggle](https://www.kaggle.com/team-ai/spam-text-message-classification) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1422/timeline | null | completed | null | null | false | [
"Please could you post the whole script? I can't reproduce your issue. After updating the feature names/labels to match with the data, everything works fine for me. Try to update datasets/transformers to the newest version.",
"Actually, the problem was how `tokenize` function was defined. This was completely my side mistake, so there are really no needs in this issue anymore"
] |
https://api.github.com/repos/huggingface/datasets/issues/4940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4940/comments | https://api.github.com/repos/huggingface/datasets/issues/4940/events | https://github.com/huggingface/datasets/pull/4940 | 1,363,513,058 | PR_kwDODunzps4-c6WY | 4,940 | Fix multilinguality tag and missing sections in xquad_r dataset card | [] | closed | false | null | 1 | 2022-09-06T16:05:35Z | 2022-09-12T10:11:07Z | 2022-09-12T10:08:48Z | null | This PR fixes issue reported on the Hub:
- Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4940/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4940/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4940.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4940",
"merged_at": "2022-09-12T10:08:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4940.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4940"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2688/comments | https://api.github.com/repos/huggingface/datasets/issues/2688/events | https://github.com/huggingface/datasets/issues/2688 | 949,182,074 | MDU6SXNzdWU5NDkxODIwNzQ= | 2,688 | hebrew language codes he and iw should be treated as aliases | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-20T23:13:52Z | 2021-07-21T16:34:53Z | 2021-07-21T16:34:53Z | null | https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2688/timeline | null | completed | null | null | false | [
"Hi @eyaler, thanks for reporting.\r\n\r\nWhile you are true with respect the Hebrew language tag (\"iw\" is deprecated and \"he\" is the preferred value), in the \"mc4\" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https://www.tensorflow.org/datasets/catalog/c4).",
"For discoverability on the website I updated the YAML tags at the top of the mC4 dataset card https://github.com/huggingface/datasets/commit/38288087b1b02f97586e0346e8f28f4960f1fd37\r\n\r\nOnce the website is updated, mC4 will be listed in https://huggingface.co/datasets?filter=languages:he\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1408/comments | https://api.github.com/repos/huggingface/datasets/issues/1408/events | https://github.com/huggingface/datasets/pull/1408 | 760,590,589 | MDExOlB1bGxSZXF1ZXN0NTM1Mzk3MTAw | 1,408 | adding fake-news-english | [] | closed | false | null | 1 | 2020-12-09T19:02:07Z | 2020-12-13T00:49:19Z | 2020-12-13T00:49:19Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1408/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1408.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1408",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1408.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1408"
} | true | [
"also don't forget to format your code using `make style` to fix the CI"
] |
https://api.github.com/repos/huggingface/datasets/issues/4777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4777/comments | https://api.github.com/repos/huggingface/datasets/issues/4777/events | https://github.com/huggingface/datasets/pull/4777 | 1,324,548,784 | PR_kwDODunzps48cByL | 4,777 | Require torchaudio<0.12.0 to avoid RuntimeError | [] | closed | false | null | 1 | 2022-08-01T14:50:50Z | 2022-08-02T17:35:14Z | 2022-08-02T17:21:39Z | null | Related to:
- https://github.com/huggingface/transformers/issues/18379
Partially fixes #4776. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4777/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4777.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4777",
"merged_at": "2022-08-02T17:21:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4777.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4777"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1899/comments | https://api.github.com/repos/huggingface/datasets/issues/1899/events | https://github.com/huggingface/datasets/pull/1899 | 810,308,332 | MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4 | 1,899 | Fix: ALT - fix duplicated examples in alt-parallel | [] | closed | false | null | 0 | 2021-02-17T15:53:56Z | 2021-02-17T17:20:49Z | 2021-02-17T17:20:49Z | null | As noticed in #1898 by @10-zin the examples of the `alt-paralel` configurations have all the same values for the `translation` field.
This was due to a bad copy of a python dict.
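For illustration, a minimal sketch of this kind of dict-copy pitfall (not the actual loader code):
```python
# Appending the same dict object for every example makes all entries identical,
# because each list element is just a reference to that one dict.
translation = {}
examples = []
for lang, text in [("en", "hello"), ("fr", "bonjour")]:
    translation[lang] = text
    examples.append(translation)  # same object appended every time
print(examples)  # both entries show {'en': 'hello', 'fr': 'bonjour'}

# The fix is to append a fresh copy so each example keeps its own values:
# examples.append(dict(translation))
```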
This PR fixes that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1899/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1899.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1899",
"merged_at": "2021-02-17T17:20:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1899.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1899"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/753/comments | https://api.github.com/repos/huggingface/datasets/issues/753/events | https://github.com/huggingface/datasets/pull/753 | 727,434,935 | MDExOlB1bGxSZXF1ZXN0NTA4MzI4ODM0 | 753 | Fix doc links to viewer | [] | closed | false | null | 0 | 2020-10-22T14:20:16Z | 2020-10-23T08:42:11Z | 2020-10-23T08:42:11Z | null | It seems #733 forgot some links in the doc :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/753/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/753",
"merged_at": "2020-10-23T08:42:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/753"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2517/comments | https://api.github.com/repos/huggingface/datasets/issues/2517/events | https://github.com/huggingface/datasets/pull/2517 | 924,643,345 | MDExOlB1bGxSZXF1ZXN0NjczMjUwODk1 | 2,517 | Fix typo in MatthewsCorrelation class name | [] | closed | false | null | 0 | 2021-06-18T07:53:06Z | 2021-06-18T08:43:55Z | 2021-06-18T08:43:55Z | null | Close #2513. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2517/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2517/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2517.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2517",
"merged_at": "2021-06-18T08:43:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2517.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2517"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2478/comments | https://api.github.com/repos/huggingface/datasets/issues/2478/events | https://github.com/huggingface/datasets/issues/2478 | 918,507,510 | MDU6SXNzdWU5MTg1MDc1MTA= | 2,478 | Create release script | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2021-06-11T09:38:02Z | 2023-07-20T13:22:23Z | null | null | Create a script so that releases can be done automatically (as done in `transformers`). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2478/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2478/timeline | null | null | null | null | false | [
"I've aligned the release script with Transformers in #6004, so I think this issue can be closed."
] |
https://api.github.com/repos/huggingface/datasets/issues/308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/308/comments | https://api.github.com/repos/huggingface/datasets/issues/308/events | https://github.com/huggingface/datasets/pull/308 | 644,195,251 | MDExOlB1bGxSZXF1ZXN0NDM4ODYyMzYy | 308 | Specify utf-8 encoding for MRPC files | [] | closed | false | null | 0 | 2020-06-23T22:44:36Z | 2020-06-25T12:52:21Z | 2020-06-25T12:16:10Z | null | Fixes #307, again probably a Windows-related issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/308/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/308/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/308.diff",
"html_url": "https://github.com/huggingface/datasets/pull/308",
"merged_at": "2020-06-25T12:16:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/308.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/308"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/104/comments | https://api.github.com/repos/huggingface/datasets/issues/104/events | https://github.com/huggingface/datasets/pull/104 | 618,277,081 | MDExOlB1bGxSZXF1ZXN0NDE4MDMzOTY0 | 104 | Add trivia_q | [] | closed | false | null | 0 | 2020-05-14T14:27:19Z | 2020-07-12T05:34:20Z | 2020-05-14T20:23:32Z | null | Currently tested only for one config to pass tests. Needs to add more dummy data later. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/104/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/104/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/104.diff",
"html_url": "https://github.com/huggingface/datasets/pull/104",
"merged_at": "2020-05-14T20:23:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/104.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/104"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3276/comments | https://api.github.com/repos/huggingface/datasets/issues/3276/events | https://github.com/huggingface/datasets/pull/3276 | 1,053,793,063 | PR_kwDODunzps4uihih | 3,276 | Update KILT metadata JSON | [] | closed | false | null | 0 | 2021-11-15T15:25:25Z | 2021-11-16T11:21:59Z | 2021-11-16T11:21:58Z | null | Fix #3265. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3276/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3276/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3276.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3276",
"merged_at": "2021-11-16T11:21:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3276.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3276"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/766/comments | https://api.github.com/repos/huggingface/datasets/issues/766/events | https://github.com/huggingface/datasets/issues/766 | 730,669,596 | MDU6SXNzdWU3MzA2Njk1OTY= | 766 | [GEM] add DART data-to-text generation dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 2 | 2020-10-27T17:34:04Z | 2020-12-03T13:37:18Z | 2020-12-03T13:37:18Z | null | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** the dataset will likely be included in the GEM benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/766/timeline | null | completed | null | null | false | [
"Is this a duplicate of #924 ?",
"Yup, closing! Haven't been keeping track of the solved issues during the sprint."
] |
https://api.github.com/repos/huggingface/datasets/issues/521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/521/comments | https://api.github.com/repos/huggingface/datasets/issues/521/events | https://github.com/huggingface/datasets/pull/521 | 682,477,648 | MDExOlB1bGxSZXF1ZXN0NDcwNzEyNzgz | 521 | Fix dictionnary (dictionary) typo | [] | closed | false | null | 1 | 2020-08-20T07:09:02Z | 2020-08-20T07:52:04Z | 2020-08-20T07:52:04Z | null | This error happens many times I'm thinking maybe its spelled like this on purpose? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/521/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/521/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/521.diff",
"html_url": "https://github.com/huggingface/datasets/pull/521",
"merged_at": "2020-08-20T07:52:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/521.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/521"
} | true | [
"Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2677/comments | https://api.github.com/repos/huggingface/datasets/issues/2677/events | https://github.com/huggingface/datasets/issues/2677 | 948,429,788 | MDU6SXNzdWU5NDg0Mjk3ODg= | 2,677 | Error when downloading C4 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-07-20T08:37:30Z | 2021-07-20T14:41:31Z | 2021-07-20T14:38:10Z | null | Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug or do I have some configurations missing on my server?
Thanks!
<img width="1014" alt="Снимок экрана 2021-07-20 в 11 37 17" src="https://user-images.githubusercontent.com/36672861/126289448-6e0db402-5f3f-485a-bf74-eb6e0271fc25.png"> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2677/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2677/timeline | null | completed | null | null | false | [
"Hi Thanks for reporting !\r\nIt looks like these files are not correctly reported in the list of expected files to download, let me fix that ;)",
"Alright this is fixed now. We'll do a new release soon to make the fix available.\r\n\r\nIn the meantime feel free to simply pass `ignore_verifications=True` to `load_dataset` to skip this error",
"@lhoestq thank you for such a quick feedback!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4937/comments | https://api.github.com/repos/huggingface/datasets/issues/4937/events | https://github.com/huggingface/datasets/pull/4937 | 1,363,426,946 | PR_kwDODunzps4-cn6W | 4,937 | Remove deprecated identical_ok | [] | closed | false | null | 1 | 2022-09-06T15:01:24Z | 2022-09-06T22:24:09Z | 2022-09-06T22:21:57Z | null | `huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated, and will be removed soon. It even has no effect at the moment when it's passed:
```python
Args:
...
identical_ok (`bool`, *optional*, defaults to `True`):
Deprecated: will be removed in 0.11.0.
Changing this value has no effect.
...
```
There was only one occurrence of `identical_ok=False`, but it's maybe not worth adding a check to verify whether the files are the same.
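For context, a minimal sketch of an upload call without the deprecated argument (the repo id and file names here are hypothetical):
```python
# Hypothetical sketch: HfApi.upload_file called without identical_ok,
# since that argument is deprecated and has no effect anyway.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="username/my-dataset",  # hypothetical repo
    repo_type="dataset",
)
```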
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4937/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4937.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4937",
"merged_at": "2022-09-06T22:21:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4937.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4937"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4948/comments | https://api.github.com/repos/huggingface/datasets/issues/4948/events | https://github.com/huggingface/datasets/pull/4948 | 1,364,973,778 | PR_kwDODunzps4-hwsl | 4,948 | Fix minor typo in error message for missing imports | [] | closed | false | null | 1 | 2022-09-07T17:20:51Z | 2022-09-08T14:59:31Z | 2022-09-08T14:57:15Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4948/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4948.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4948",
"merged_at": "2022-09-08T14:57:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4948.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4948"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4623/comments | https://api.github.com/repos/huggingface/datasets/issues/4623/events | https://github.com/huggingface/datasets/issues/4623 | 1,293,042,894 | I_kwDODunzps5NEkTO | 4,623 | Loading MNIST as Pytorch Dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 4 | 2022-07-04T11:33:10Z | 2022-07-04T14:40:50Z | null | null | ## Describe the bug
Conversion of the MNIST dataset to PyTorch tensors fails with the error below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mnist", split="train")
dataset.set_format('torch')
dataset[0]
print()
```
## Expected results
Expected to see torch tensors for the image and the label.
## Actual results
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/chapm/PycharmProjects/multiviewdata/multiviewdata/huggingface/mnist.py", line 13, in <module>
dataset[0]
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2154, in __getitem__
return self._getitem(
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2139, in _getitem
formatted_output = format_table(
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 58, in format_row
return self.recursive_tensorize(row)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 54, in recursive_tensorize
return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 356, in map_nested
mapped = [
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 357, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in _single_map_nested
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in <dictcomp>
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 293, in _single_map_nested
return function(data_struct)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 51, in _recursive_tensorize
return self._tensorize(data_struct)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 38, in _tensorize
if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
python-BaseException
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Windows-10-10.0.22579-SP0
- Python version: 3.9.2
- PyArrow version: 8.0.0
- Pandas version: 1.4.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4623/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4623/timeline | null | null | null | null | false | [
"Hi ! We haven't implemented the conversion from images data to PyTorch tensors yet I think\r\n\r\ncc @mariosasko ",
"So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndef transform_func(examples):\r\n examples[\"image\"] = [np.array(img) for img in examples[\"image\"]]\r\n return examples\r\ndataset = dataset.with_transform(transform_func)\r\ndataset[0]\r\n``` ",
"This then appears to work with pytorch dataloaders as:\r\n```\r\ndataloader=torch.utils.data.DataLoader(dataset,batch_size=1)\r\n```\r\n\r\nand tensorflow as:\r\n```\r\ndataset=dataset.to_tf_dataset(batch_size=1)\r\n```",
"Hi! `set_transform`/`with_transform` is indeed the correct solution for the conversion. Improving this part of the API is one of the things I'm working on currently, so stay tuned!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2964/comments | https://api.github.com/repos/huggingface/datasets/issues/2964/events | https://github.com/huggingface/datasets/issues/2964 | 1,006,605,904 | I_kwDODunzps47_5ZQ | 2,964 | Error when calculating Matthews Correlation Coefficient loaded with `load_metric` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-09-24T15:55:21Z | 2021-09-25T08:06:07Z | 2021-09-25T08:06:07Z | null | ## Describe the bug
After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `🤗datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if required).
## Steps to reproduce the bug
```python
import torch
predictions = torch.ones((10,))
references = torch.zeros((10,))
from datasets import load_metric
METRIC = load_metric("matthews_correlation")
result = METRIC.compute(predictions=predictions, references=references)
```
## Expected results
We should expect a Python `dict` as follows:
```
{
"matthews_correlation": float()
}
```
as defined in https://github.com/huggingface/datasets/blob/master/metrics/matthews_correlation/matthews_correlation.py, so the fix will imply removing `.item()`, since the value returned by the `scikit-learn` function is not a `torch.Tensor` but a `float`, which means that the `.item()` will fail.
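For illustration, a minimal sketch of the fix described above (illustrative only, not the actual metric script):
```python
# Per the report above, matthews_corrcoef returns a float here rather than a
# torch.Tensor, so wrapping with float() avoids relying on .item().
from sklearn.metrics import matthews_corrcoef

def compute_matthews(predictions, references, sample_weight=None):
    score = matthews_corrcoef(references, predictions, sample_weight=sample_weight)
    return {"matthews_correlation": float(score)}
```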
## Actual results
```
Traceback (most recent call last):
File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 59, in main
app()
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper
return callback(**use_params) # type: ignore
File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 43, in train
metrics = trainer.evaluate()
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2051, in evaluate
output = eval_loop(
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2292, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/home/alvaro.bartolome/XXX/xxx/metrics.py", line 20, in compute_metrics
res = METRIC.compute(predictions=predictions, references=eval_preds.label_ids)
File "/home/alvaro.bartolome/miniconda3/envs/lang/lib/python3.9/site-packages/datasets/metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/alvaro.bartolome/.cache/huggingface/modules/datasets_modules/metrics/matthews_correlation/0275f1e9a4d318e3ea8cdd87547ee0d58d894966616052e3d18444ac8ddd2357/matthews_correlation.py", line 88, in _compute
"matthews_correlation": matthews_corrcoef(references, predictions, sample_weight=sample_weight).item(),
AttributeError: 'float' object has no attribute 'item'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.15.0-1113-azure-x86_64-with-glibc2.23
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2964/timeline | null | completed | null | null | false | [
"After some more tests I've realized that this \"issue\" is due to the `numpy.float64` to `float` conversion, but when defining a function named `compute_metrics` as it follows:\r\n\r\n```python\r\ndef compute_metrics(eval_preds):\r\n metric = load_metric(\"matthews_correlation\")\r\n logits, labels = eval_preds\r\n predictions = np.argmax(logits, axis=1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n```\r\n\r\nIt fails when the evaluation metrics are computed in the `Trainer` with the same error code `AttributeError: 'float' object has no attribute 'item'` as the output is not a `numpy.float64`... Maybe I'm doing something wrong, not sure!",
"Ok after some more experiments I've realized that it's an issue from my side, at first I thought it was due to `fp16=True` in `TrainingArguments`, but in the end that may not be the issue, so I'll close this for now and check later, since the mistake is on my side :weary: Sorry for the inconvenience!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4720/comments | https://api.github.com/repos/huggingface/datasets/issues/4720/events | https://github.com/huggingface/datasets/issues/4720 | 1,309,980,195 | I_kwDODunzps5OFLYj | 4,720 | Dataset Viewer issue for shamikbose89/lancaster_newsbooks | [] | closed | false | null | 4 | 2022-07-19T20:00:07Z | 2022-09-08T16:47:21Z | 2022-09-08T16:47:21Z | null | ### Link
https://huggingface.co/datasets/shamikbose89/lancaster_newsbooks
### Description
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
I am able to use the dataset loading script locally and it also runs when I'm using the one from the hub, but the viewer still doesn't load
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4720/timeline | null | completed | null | null | false | [
"It seems like the list of splits could not be obtained:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"shamikbose89/lancaster_newsbooks\", \"default\")\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/shamikbose89--lancaster_newsbooks/2d1c63d269bf7b9342accce0a95960b1710ab4bc774248878bd80eb96c1afaf7/lancaster_newsbooks.py\", line 73, in _split_generators\r\n data_dir = dl_manager.download_and_extract(_URL)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 916, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 879, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 348, in map_nested\r\n return function(data_struct)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 884, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 388, in _get_extraction_protocol\r\n return _get_extraction_protocol_with_magic_number(f)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 354, in _get_extraction_protocol_with_magic_number\r\n f.seek(0)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 684, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nping @huggingface/datasets ",
"Oh, I removed the 'split' key from `kwargs`. I put it back in, but there's still the same error",
"It looks like the data host doesn't support http range requests, which is necessary to glob inside a ZIP archive in streaming mode. Can you try hosting the dataset elsewhere ? Or download each file separately from https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/2531 ?",
"@lhoestq Thanks! That seems to have solved it. I can get the splits with the `get_dataset_split_names()` function. The dataset viewer is still not loading properly, though. The new error is\r\n```\r\nStatus code: 400\r\nException: BadZipFile\r\nMessage: File is not a zip file\r\n```\r\n\r\nPS. The dataset loads properly and can be accessed"
] |
https://api.github.com/repos/huggingface/datasets/issues/5420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5420/comments | https://api.github.com/repos/huggingface/datasets/issues/5420/events | https://github.com/huggingface/datasets/pull/5420 | 1,532,265,742 | PR_kwDODunzps5HVAhL | 5,420 | ci: 🎡 remove two obsolete issue templates | [] | closed | false | null | 3 | 2023-01-13T12:58:43Z | 2023-01-13T13:36:00Z | 2023-01-13T13:29:01Z | null | add-dataset is not needed anymore since the "canonical" datasets are on the Hub. And dataset-viewer is managed within the datasets-server project.
See https://github.com/huggingface/datasets/issues/new/choose
<img width="1245" alt="Capture d’écran 2023-01-13 à 13 59 58" src="https://user-images.githubusercontent.com/1676121/212325813-2d4c30e2-343e-4aa2-8cce-b2b77f45628e.png">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5420/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5420/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5420.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5420",
"merged_at": "2023-01-13T13:29:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5420.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5420"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008450 / 0.011353 (-0.002902) | 0.004478 / 0.011008 (-0.006530) | 0.100440 / 0.038508 (0.061931) | 0.029568 / 0.023109 (0.006459) | 0.296705 / 0.275898 (0.020807) | 0.354565 / 0.323480 (0.031085) | 0.006887 / 0.007986 (-0.001098) | 0.003415 / 0.004328 (-0.000914) | 0.078876 / 0.004250 (0.074626) | 0.034927 / 0.037052 (-0.002125) | 0.307695 / 0.258489 (0.049206) | 0.340917 / 0.293841 (0.047076) | 0.033630 / 0.128546 (-0.094916) | 0.011626 / 0.075646 (-0.064020) | 0.322644 / 0.419271 (-0.096627) | 0.040254 / 0.043533 (-0.003279) | 0.297419 / 0.255139 (0.042280) | 0.321584 / 0.283200 (0.038384) | 0.086202 / 0.141683 (-0.055481) | 1.465579 / 1.452155 (0.013425) | 1.521456 / 1.492716 (0.028740) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200890 / 0.018006 (0.182884) | 0.410300 / 0.000490 (0.409811) | 0.001647 / 0.000200 (0.001447) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022569 / 0.037411 (-0.014843) | 0.096062 / 0.014526 (0.081536) | 0.102474 / 0.176557 (-0.074082) | 0.138596 / 0.737135 (-0.598539) | 0.106262 / 0.296338 (-0.190077) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415976 / 0.215209 (0.200766) | 4.144322 / 2.077655 (2.066667) | 1.871783 / 1.504120 (0.367663) | 1.669478 / 1.541195 (0.128283) | 1.718214 / 1.468490 
(0.249724) | 0.687870 / 4.584777 (-3.896907) | 3.362084 / 3.745712 (-0.383628) | 1.844127 / 5.269862 (-3.425735) | 1.149611 / 4.565676 (-3.416066) | 0.081410 / 0.424275 (-0.342865) | 0.012278 / 0.007607 (0.004671) | 0.518245 / 0.226044 (0.292200) | 5.185164 / 2.268929 (2.916236) | 2.299029 / 55.444624 (-53.145595) | 1.960021 / 6.876477 (-4.916456) | 2.009751 / 2.142072 (-0.132322) | 0.803759 / 4.805227 (-4.001468) | 0.147340 / 6.500664 (-6.353324) | 0.063896 / 0.075469 (-0.011573) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254142 / 1.841788 (-0.587646) | 13.799683 / 8.074308 (5.725375) | 13.940387 / 10.191392 (3.748995) | 0.151246 / 0.680424 (-0.529178) | 0.028709 / 0.534201 (-0.505491) | 0.391600 / 0.579283 (-0.187683) | 0.405750 / 0.434364 (-0.028614) | 0.455479 / 0.540337 (-0.084858) | 0.541022 / 1.386936 (-0.845914) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006462 / 0.011353 (-0.004891) | 0.004462 / 0.011008 (-0.006547) | 0.096588 / 0.038508 (0.058080) | 0.026931 / 0.023109 (0.003822) | 0.344595 / 0.275898 (0.068697) | 0.378743 / 0.323480 (0.055264) | 0.005672 / 0.007986 (-0.002314) | 0.003345 / 0.004328 (-0.000984) | 0.074363 / 0.004250 (0.070112) | 0.037300 / 0.037052 (0.000248) | 0.346895 / 0.258489 (0.088406) | 0.388585 / 0.293841 (0.094744) | 0.031459 / 0.128546 (-0.097088) | 0.011522 / 0.075646 (-0.064124) | 0.318507 / 0.419271 (-0.100764) | 0.041145 / 0.043533 (-0.002388) | 0.343866 / 0.255139 (0.088727) | 0.366490 / 0.283200 (0.083291) | 0.086793 / 0.141683 (-0.054890) | 1.483859 / 1.452155 (0.031704) | 1.574006 / 1.492716 (0.081290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220436 / 0.018006 (0.202430) | 0.402988 / 0.000490 (0.402498) | 0.000435 / 0.000200 (0.000235) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024573 / 0.037411 (-0.012838) | 0.099190 / 0.014526 (0.084664) | 0.106796 / 0.176557 (-0.069761) | 0.142387 / 0.737135 (-0.594748) | 0.109991 / 0.296338 (-0.186347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473452 / 0.215209 (0.258243) | 4.749554 / 2.077655 (2.671899) | 2.433482 / 1.504120 (0.929362) | 2.224276 / 1.541195 (0.683082) | 2.261579 / 1.468490 (0.793088) | 0.699876 / 4.584777 (-3.884901) | 3.378366 / 3.745712 (-0.367346) | 1.835062 / 5.269862 (-3.434799) | 1.161249 / 4.565676 (-3.404427) | 0.082967 / 0.424275 (-0.341308) | 0.012745 / 0.007607 (0.005138) | 0.580006 / 0.226044 (0.353962) | 5.789868 / 2.268929 (3.520939) | 2.909496 / 55.444624 (-52.535128) | 2.539196 / 6.876477 (-4.337280) | 2.617737 / 2.142072 (0.475665) | 0.810320 / 4.805227 (-3.994907) | 0.152501 / 6.500664 (-6.348163) | 0.067201 / 0.075469 (-0.008268) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257844 / 1.841788 (-0.583943) | 13.865295 / 8.074308 (5.790987) | 14.169073 / 10.191392 (3.977680) | 0.135655 / 0.680424 (-0.544769) | 0.016597 / 0.534201 (-0.517604) | 0.374915 / 0.579283 (-0.204368) | 0.382771 / 0.434364 (-0.051593) | 0.431934 / 0.540337 (-0.108403) | 0.524617 / 1.386936 (-0.862319) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008748 / 0.011353 (-0.002605) | 0.004489 / 0.011008 (-0.006519) | 0.100923 / 0.038508 (0.062415) | 0.031436 / 0.023109 (0.008326) | 0.306508 / 0.275898 (0.030610) | 0.365110 / 0.323480 (0.041630) | 0.007161 / 0.007986 (-0.000824) | 0.005489 / 0.004328 (0.001160) | 0.078909 / 0.004250 (0.074658) | 0.036097 / 0.037052 (-0.000955) | 0.307907 / 0.258489 (0.049418) | 0.370277 / 0.293841 (0.076436) | 0.034184 / 0.128546 (-0.094362) | 0.011613 / 0.075646 (-0.064033) | 0.322896 / 0.419271 (-0.096375) | 0.041829 / 0.043533 (-0.001704) | 0.299669 / 0.255139 (0.044530) | 0.322217 / 0.283200 (0.039017) | 0.087751 / 0.141683 (-0.053932) | 1.476277 / 1.452155 (0.024122) | 1.548196 / 1.492716 (0.055480) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183002 / 0.018006 (0.164995) | 0.415627 / 0.000490 (0.415138) | 0.003272 / 0.000200 (0.003072) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024881 / 0.037411 (-0.012531) | 0.103424 / 0.014526 (0.088898) | 0.106446 / 0.176557 (-0.070110) | 0.142806 / 0.737135 (-0.594330) | 0.110938 / 0.296338 (-0.185401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421669 / 0.215209 (0.206460) | 4.207457 / 2.077655 (2.129802) | 1.882176 / 1.504120 (0.378056) | 1.677609 / 1.541195 (0.136415) | 1.734065 / 1.468490 
(0.265575) | 0.695915 / 4.584777 (-3.888862) | 3.416731 / 3.745712 (-0.328981) | 1.872575 / 5.269862 (-3.397286) | 1.163612 / 4.565676 (-3.402064) | 0.082710 / 0.424275 (-0.341565) | 0.012659 / 0.007607 (0.005052) | 0.528785 / 0.226044 (0.302741) | 5.305328 / 2.268929 (3.036399) | 2.299850 / 55.444624 (-53.144774) | 1.968137 / 6.876477 (-4.908339) | 2.028326 / 2.142072 (-0.113746) | 0.813157 / 4.805227 (-3.992070) | 0.149997 / 6.500664 (-6.350668) | 0.066739 / 0.075469 (-0.008730) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206332 / 1.841788 (-0.635456) | 13.795510 / 8.074308 (5.721202) | 14.367695 / 10.191392 (4.176303) | 0.138106 / 0.680424 (-0.542318) | 0.028760 / 0.534201 (-0.505441) | 0.394822 / 0.579283 (-0.184461) | 0.403291 / 0.434364 (-0.031073) | 0.463273 / 0.540337 (-0.077065) | 0.540881 / 1.386936 (-0.846055) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006830 / 0.011353 (-0.004523) | 0.004606 / 0.011008 (-0.006402) | 0.097763 / 0.038508 (0.059255) | 0.027832 / 0.023109 (0.004723) | 0.422970 / 0.275898 (0.147072) | 0.460313 / 0.323480 (0.136833) | 0.005110 / 0.007986 (-0.002876) | 0.003428 / 0.004328 (-0.000901) | 0.075047 / 0.004250 (0.070797) | 0.038374 / 0.037052 (0.001322) | 0.422762 / 0.258489 (0.164273) | 0.469886 / 0.293841 (0.176045) | 0.032391 / 0.128546 (-0.096155) | 0.011804 / 0.075646 (-0.063843) | 0.320439 / 0.419271 (-0.098832) | 0.041939 / 0.043533 (-0.001594) | 0.422521 / 0.255139 (0.167382) | 0.446420 / 0.283200 (0.163220) | 0.090715 / 0.141683 (-0.050968) | 1.484578 / 1.452155 (0.032423) | 1.556154 / 1.492716 (0.063438) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260735 / 0.018006 (0.242728) | 0.415586 / 0.000490 (0.415096) | 0.026960 / 0.000200 (0.026760) | 0.000296 / 0.000054 (0.000241) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024926 / 0.037411 (-0.012486) | 0.099651 / 0.014526 (0.085125) | 0.107810 / 0.176557 (-0.068747) | 0.148685 / 0.737135 (-0.588451) | 0.112725 / 0.296338 (-0.183614) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472669 / 0.215209 (0.257460) | 4.718827 / 2.077655 (2.641172) | 2.475583 / 1.504120 (0.971463) | 2.260862 / 1.541195 (0.719667) | 2.307820 / 1.468490 (0.839330) | 0.699464 / 4.584777 (-3.885313) | 3.376282 / 3.745712 (-0.369431) | 1.872650 / 5.269862 (-3.397211) | 1.176399 / 4.565676 (-3.389277) | 0.082854 / 0.424275 (-0.341421) | 0.012845 / 0.007607 (0.005237) | 0.582088 / 0.226044 (0.356044) | 5.861609 / 2.268929 (3.592681) | 2.930728 / 55.444624 (-52.513896) | 2.624310 / 6.876477 (-4.252167) | 2.762130 / 2.142072 (0.620058) | 0.811902 / 4.805227 (-3.993325) | 0.152516 / 6.500664 (-6.348149) | 0.067670 / 0.075469 (-0.007799) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289790 / 1.841788 (-0.551997) | 14.267607 / 8.074308 (6.193299) | 14.120655 / 10.191392 (3.929263) | 0.128442 / 0.680424 (-0.551982) | 0.017079 / 0.534201 (-0.517121) | 0.381807 / 0.579283 (-0.197476) | 0.400546 / 0.434364 (-0.033818) | 0.447629 / 0.540337 (-0.092709) | 0.532006 / 1.386936 (-0.854930) |\n\n</details>\n</details>\n\n\n"
] |