url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | labels | state | locked | milestone | comments | created_at | updated_at | closed_at | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request | comments_text |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4340/comments | https://api.github.com/repos/huggingface/datasets/issues/4340/events | https://github.com/huggingface/datasets/pull/4340 | 1,234,671,025 | PR_kwDODunzps43wY1U | 4,340 | Fix irc_disentangle dataset script | [] | closed | false | null | 1 | 2022-05-13T02:37:57Z | 2022-05-24T15:37:30Z | 2022-05-24T15:37:29Z | null | updated extracted dataset's repo's latest commit hash (included in tarball's name), and updated the related data_infos.json | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4340/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4340",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4340"
} | true | [
"Thanks ! This has been fixed in https://github.com/huggingface/datasets/pull/4377, we can close this PR"
] |
https://api.github.com/repos/huggingface/datasets/issues/433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/433/comments | https://api.github.com/repos/huggingface/datasets/issues/433/events | https://github.com/huggingface/datasets/issues/433 | 665,311,025 | MDU6SXNzdWU2NjUzMTEwMjU= | 433 | How to reuse functionality of a (generic) dataset? | [] | closed | false | null | 4 | 2020-07-24T17:27:37Z | 2022-10-04T17:59:34Z | 2022-10-04T17:59:33Z | null | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to reuse formats and loading functionality for datasets with a common format?
In my case, it took a bit of time to create the Brat dataset, and I think others would appreciate not having to think about that again. Also, I assume there are other formats (e.g. CoNLL) that are widely used, so having this would really ease dataset onboarding and adoption of the library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/433/timeline | null | completed | null | null | false | [
"Hi @ArneBinder, we have a few \"generic\" datasets which are intended to load data files with a predefined format:\r\n- csv: https://github.com/huggingface/nlp/tree/master/datasets/csv\r\n- json: https://github.com/huggingface/nlp/tree/master/datasets/json\r\n- text: https://github.com/huggingface/nlp/tree/master/datasets/text\r\n\r\nYou can find more details about this way to load datasets here in the documentation: https://huggingface.co/nlp/loading_datasets.html#from-local-files\r\n\r\nMaybe your brat loading script could be shared in a similar fashion?",
"> Maybe your brat loading script could be shared in a similar fashion?\r\n\r\n@thomwolf that was also my first idea and I think I will tackle that in the next days. I separated the code and created a real abstract class `AbstractBrat` to allow to inherit from that (I've just seen that the dataset_loader loads the first non abstract class), now `Brat` is very similar in its functionality to https://github.com/huggingface/nlp/tree/master/datasets/text but inherits from `AbstractBrat`.\r\n\r\nHowever, it is still not clear to me how to add a specific dataset (as explained in https://huggingface.co/nlp/add_dataset.html) to your repo that uses this format/abstract class, i.e. re-using the `features` entry of the `DatasetInfo` object and `_generate_examples()`. Again, by doing so, the only remaining entries/functions to define would be `_DESCRIPTION`, `_CITATION`, `homepage` and `_URL` (which is all copy-paste stuff) and `_split_generators()`.\r\n \r\nIn a lack of better ideas, I tried sth like below, but of course it does not work outside `nlp` (`AbstractBrat` is currently defined in [datasets/brat.py](https://github.com/ArneBinder/nlp/blob/5e81fb8710546ee7be3353a7f02a3045e9a8351e/datasets/brat/brat.py)):\r\n```python\r\nfrom __future__ import absolute_import, division, print_function\r\n\r\nimport os\r\n\r\nimport nlp\r\n\r\nfrom datasets.brat.brat import AbstractBrat\r\n\r\n_CITATION = \"\"\"\r\n@inproceedings{lauscher2018b,\r\n title = {An argument-annotated corpus of scientific publications},\r\n booktitle = {Proceedings of the 5th Workshop on Mining Argumentation},\r\n publisher = {Association for Computational Linguistics},\r\n author = {Lauscher, Anne and Glava\\v{s}, Goran and Ponzetto, Simone Paolo},\r\n address = {Brussels, Belgium},\r\n year = {2018},\r\n pages = {40–46}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThis dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing \r\nfine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific \r\npublications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of \r\nscientific writing.\r\n\"\"\"\r\n\r\n_URL = \"http://data.dws.informatik.uni-mannheim.de/sci-arg/compiled_corpus.zip\"\r\n\r\n\r\nclass Sciarg(AbstractBrat):\r\n\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def _info(self):\r\n\r\n brat_features = super()._info().features\r\n return nlp.DatasetInfo(\r\n # This is the description that will appear on the datasets page.\r\n description=_DESCRIPTION,\r\n # nlp.features.FeatureConnectors\r\n features=brat_features,\r\n # If there's a common (input, target) tuple from the features,\r\n # specify them here. 
They'll be used if as_supervised=True in\r\n # builder.as_dataset.\r\n #supervised_keys=None,\r\n # Homepage of the dataset for documentation\r\n homepage=\"https://github.com/anlausch/ArguminSci\",\r\n citation=_CITATION,\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n # TODO: Downloads the data and defines the splits\r\n # dl_manager is a nlp.download.DownloadManager that can be used to\r\n # download and extract URLs\r\n dl_dir = dl_manager.download_and_extract(_URL)\r\n data_dir = os.path.join(dl_dir, \"compiled_corpus\")\r\n print(f'data_dir: {data_dir}')\r\n return [\r\n nlp.SplitGenerator(\r\n name=nlp.Split.TRAIN,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"directory\": data_dir,\r\n },\r\n ),\r\n ]\r\n``` \r\n\r\nNevertheless, many thanks for tackling the dataset accessibility problem with this great library!",
"As temporary fix I've created [ArneBinder/nlp-formats](https://github.com/ArneBinder/nlp-formats) (contributions welcome).",
"Hi! You can either copy&paste the builder script and import the builder from there or use `datasets.load_dataset_builder` inside the script and call the methods of the returned builder object."
] |
https://api.github.com/repos/huggingface/datasets/issues/5291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5291/comments | https://api.github.com/repos/huggingface/datasets/issues/5291/events | https://github.com/huggingface/datasets/pull/5291 | 1,462,983,472 | PR_kwDODunzps5DoKNC | 5,291 | [build doc] for v2.7.1 & v2.6.2 | [] | closed | false | null | 2 | 2022-11-24T08:54:47Z | 2022-11-24T09:14:10Z | 2022-11-24T09:11:15Z | null | Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5291/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5291",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5291"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"doc versions are built https://huggingface.co/docs/datasets/index"
] |
https://api.github.com/repos/huggingface/datasets/issues/2390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2390/comments | https://api.github.com/repos/huggingface/datasets/issues/2390/events | https://github.com/huggingface/datasets/pull/2390 | 897,903,642 | MDExOlB1bGxSZXF1ZXN0NjQ5ODQ0NjQ2 | 2,390 | Add check for task templates on dataset load | [] | closed | false | null | 1 | 2021-05-21T10:16:57Z | 2021-05-21T15:49:09Z | 2021-05-21T15:49:06Z | null | This PR adds a check that the features of a dataset match the schema of each compatible task template. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2390/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2390/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2390.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2390",
"merged_at": "2021-05-21T15:49:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2390.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2390"
} | true | [
"LGTM now, thank you =)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5931/comments | https://api.github.com/repos/huggingface/datasets/issues/5931/events | https://github.com/huggingface/datasets/issues/5931 | 1,745,408,784 | I_kwDODunzps5oCNMQ | 5,931 | `datasets.map` not reusing cached copy by default | [] | closed | false | null | 1 | 2023-06-07T09:03:33Z | 2023-06-21T16:15:40Z | 2023-06-21T16:15:40Z | null | ### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the transform is applied again and the cached copy is not picked up. Is there any way to pick up the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions for the same?
One more thing: my dataset occupies 6GB of storage after I use `map`. Is there any way I can reduce that memory usage?
### Steps to reproduce the bug
```
# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
self.raw_datasets = self.raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
)
vectorized_datasets = self.raw_datasets.map(
self.prepare_dataset,
remove_columns=next(iter(self.raw_datasets.values())).column_names,
num_proc=self.num_workers,
desc="preprocess datasets",
)
# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
self.is_audio_in_length_range,
num_proc=self.num_workers,
input_columns=["input_length"],
)
def prepare_dataset(self, batch):
# load audio
sample = batch["audio"]
inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
return batch
```
### Expected behavior
`map` to use cached copy and if possible an alternative technique to reduce memory usage after using `map`
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5931/timeline | null | completed | null | null | false | [
"This can happen when a map transform cannot be hashed deterministically (e.g., an object referenced by the transform changes its state after the first call - an issue with fast tokenizers). The solution is to provide `cache_file_name` in the `map` call to check this file for the cached result instead of relying on the default caching mechanism."
] |
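A minimal sketch of the fix described in the record above, assuming a hypothetical local dataset and a stand-in `prepare_dataset` transform: pinning `cache_file_name` in the `map` call makes a rerun look for that file instead of relying on the transform hash (on a `DatasetDict` the equivalent argument is `cache_file_names`, one path per split).

```python
from datasets import load_from_disk

ds = load_from_disk("path/to/dataset")["train"]  # hypothetical path and split

def prepare_dataset(example):
    # stand-in for the audio feature extraction in the issue above
    example["input_length"] = len(example["target_text"])
    return example

# An explicit cache file is reused on the next run even when the transform
# cannot be hashed deterministically (e.g. a tokenizer that mutates its state).
ds = ds.map(
    prepare_dataset,
    cache_file_name="prepare_dataset.arrow",
    load_from_cache_file=True,
)
```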
https://api.github.com/repos/huggingface/datasets/issues/388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/388/comments | https://api.github.com/repos/huggingface/datasets/issues/388/events | https://github.com/huggingface/datasets/issues/388 | 656,707,497 | MDU6SXNzdWU2NTY3MDc0OTc= | 388 | 🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 5 | 2020-07-14T15:36:41Z | 2022-10-04T18:01:28Z | 2022-10-04T18:01:28Z | null | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:
```
nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```
The code runs, but the download speed is **extremely slow**; the same behaviour is not observed on `wmt16` and `wmt18`.
2. When trying to download `wmt17 zh-en`, I got the following error:
> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/388/timeline | null | completed | null | null | false | [
"similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDownloading: 2%|▉ | 40.9M/2.37G [04:48<5:03:06, 128kB/s]\r\n`\r\nCould we just download a specific subdataset in 'wmt14', such as 'newstest14'? ",
"> The code runs but the download speed is extremely slow, the same behaviour is not observed on wmt16 and wmt18\r\n\r\nThe original source for the files may provide slow download speeds.\r\nWe can probably host these files ourselves.\r\n\r\n> When trying to download wmt17 zh-en, I got the following error:\r\n> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz\r\n\r\nLooks like the file`UNv1.0.en-zh.tar.gz` is missing, or the url changed. We need to fix that\r\n\r\n> Could we just download a specific subdataset in 'wmt14', such as 'newstest14'?\r\n\r\nRight now I don't think it's possible. Maybe @patrickvonplaten knows more about it\r\n",
"Yeah, the download speed is sadly always extremely slow :-/. \r\nI will try to check out the `wmt17 zh-en` bug :-) ",
"Maybe this can be used - https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 ",
"These issues seem to be fixed now."
] |
https://api.github.com/repos/huggingface/datasets/issues/750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/750/comments | https://api.github.com/repos/huggingface/datasets/issues/750/events | https://github.com/huggingface/datasets/issues/750 | 726,589,446 | MDU6SXNzdWU3MjY1ODk0NDY= | 750 | load_dataset doesn't include `features` in its hash | [] | closed | false | null | 0 | 2020-10-21T15:16:41Z | 2020-10-29T09:36:01Z | 2020-10-29T09:36:01Z | null | It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:
```
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/750/timeline | null | completed | null | null | false | [] |
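One hedged way to express the intent above without relying on `features` being part of the load hash is to apply the new label definition after loading; the label order below is just the example from the issue, and `cast_column` only changes the feature definition (the stored integer values are untouched).

```python
from datasets import load_dataset, ClassLabel

dataset = load_dataset("glue", "mnli")
# Applying the new ClassLabel as a post-load transform yields a new fingerprint,
# so later .map() results are cached separately from the original label order.
dataset = dataset.cast_column(
    "label", ClassLabel(names=["entailment", "contradiction", "neutral"])
)
```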
https://api.github.com/repos/huggingface/datasets/issues/5531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5531/comments | https://api.github.com/repos/huggingface/datasets/issues/5531/events | https://github.com/huggingface/datasets/issues/5531 | 1,584,387,276 | I_kwDODunzps5eb9TM | 5,531 | Invalid Arrow data from JSONL | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2023-02-14T15:39:49Z | 2023-02-14T15:46:09Z | null | null | This code fails:
```python
from datasets import Dataset
ds = Dataset.from_json(path_to_file)
ds.data.validate()
```
raises
```python
ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063)
```
This causes many issues for @TevenLeScao:
- `map` fails because it fails to rewrite invalid arrow arrays
```python
~/Desktop/hf/datasets/src/datasets/arrow_writer.py in write_examples_on_file(self)
438 if all(isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) for row in self.current_examples):
439 arrays = [row[0][col] for row in self.current_examples]
--> 440 batch_examples[col] = array_concat(arrays)
441 else:
442 batch_examples[col] = [
~/Desktop/hf/datasets/src/datasets/table.py in array_concat(arrays)
1885
1886 if not _is_extension_type(array_type):
-> 1887 return pa.concat_arrays(arrays)
1888
1889 def _offsets_concat(offsets):
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.concat_arrays()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: array slice would exceed array length
```
- `to_dict()` **segfaults** ⚠️
```python
/Users/runner/work/crossbow/crossbow/arrow/cpp/src/arrow/array/data.cc:99: Check failed: (off) <= (length) Slice offset greater
than array length
```
To reproduce: unzip the archive and run the above code using `sanity_oscar_en.jsonl`
[sanity_oscar_en.jsonl.zip](https://github.com/huggingface/datasets/files/10734124/sanity_oscar_en.jsonl.zip)
PS: reading using pandas and converting to Arrow works though (note that the dataset lives in RAM in this case):
```python
ds = Dataset.from_pandas(pd.read_json(path_to_file, lines=True))
ds.data.validate()
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5531/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5531/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2210/comments | https://api.github.com/repos/huggingface/datasets/issues/2210/events | https://github.com/huggingface/datasets/issues/2210 | 855,709,400 | MDU6SXNzdWU4NTU3MDk0MDA= | 2,210 | dataloading slow when using HUGE dataset | [] | closed | false | null | 2 | 2021-04-12T08:33:02Z | 2021-04-13T02:03:05Z | 2021-04-13T02:03:05Z | null | Hi,
When I use datasets with 600GB of data, the dataloading time increases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses `datasets.set_format("torch")` function and let pytorch-lightning handle ddp training.
When looking at the pytorch-lightning profiler output of two different runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause?
* 60GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 200.33 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 71.994 |1 | 71.994 | 35.937 |
run_training_batch | 0.64373 |100 | 64.373 | 32.133 |
optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 |
training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 |
model_backward | 0.37552 |100 | 37.552 | 18.745 |
model_forward | 0.22813 |100 | 22.813 | 11.387 |
training_step | 0.22759 |100 | 22.759 | 11.361 |
get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 |
```
* 600GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 3285.6 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 |
run_training_batch | 7.2596 |100 | 725.96 | 22.095 |
optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 |
training_step_and_backward | 7.223 |100 | 722.3 | 21.984 |
model_backward | 6.9662 |100 | 696.62 | 21.202 |
get_train_batch | 6.322 |100 | 632.2 | 19.241 |
model_forward | 0.24902 |100 | 24.902 | 0.75789 |
training_step | 0.2485 |100 | 24.85 | 0.75633 |
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2210/timeline | null | completed | null | null | false | [
"Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.",
"Hi, thank you for your answer. I did not realize that my issue stems from the same problem. "
] |
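For reference, a minimal sketch of the loading pattern described in the record above (an Arrow-backed dataset formatted as torch tensors and wrapped in a `DataLoader`); the path, column names, and batch size are assumptions.

```python
from datasets import load_from_disk
from torch.utils.data import DataLoader

ds = load_from_disk("path/to/large/dataset")["train"]  # hypothetical path
ds.set_format("torch", columns=["input_ids", "attention_mask"])  # assumed columns

# The Arrow files are memory-mapped, so each batch is read from disk;
# slow fetches here are what the profile above surfaced, fixed in #2122.
loader = DataLoader(ds, batch_size=8, num_workers=4)
batch = next(iter(loader))
```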
https://api.github.com/repos/huggingface/datasets/issues/5698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5698/comments | https://api.github.com/repos/huggingface/datasets/issues/5698/events | https://github.com/huggingface/datasets/issues/5698 | 1,652,183,611 | I_kwDODunzps5ielI7 | 5,698 | Add Qdrant as another search index | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2023-04-03T14:25:19Z | 2023-04-11T10:28:40Z | null | null | ### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another search index available, so users can directly build an index from a dataset. Currently, FAISS and ElasticSearch are only supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search system, while FAISS is a vector search library. A vector database such as Qdrant is a different kind of tool: it is based on similarity search (like FAISS) but is not limited to a single machine. This makes a vector database well-suited for bigger datasets and for collaboration when several people want to access a particular dataset.
### Your contribution
I can provide a PR implementing that functionality on my own. | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5698/timeline | null | null | null | null | false | [
"@mariosasko I'd appreciate your feedback on this. "
] |
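For context, a minimal sketch of the existing FAISS index interface that the request above would extend with a Qdrant backend; the random `embed` function is a placeholder for a real sentence encoder, and the dataset and column names are only illustrative.

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("crime_and_punish", split="train[:100]")

def embed(batch):
    # placeholder encoder; a real setup would use a sentence-embedding model
    return {"embeddings": [np.random.rand(128).astype("float32") for _ in batch["line"]]}

ds = ds.map(embed, batched=True)
ds.add_faiss_index(column="embeddings")
query = np.random.rand(128).astype("float32")
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=5)
```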
https://api.github.com/repos/huggingface/datasets/issues/4337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4337/comments | https://api.github.com/repos/huggingface/datasets/issues/4337/events | https://github.com/huggingface/datasets/pull/4337 | 1,234,470,083 | PR_kwDODunzps43vuzF | 4,337 | Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR | [] | closed | false | null | 2 | 2022-05-12T20:52:02Z | 2022-05-16T16:26:19Z | 2022-05-16T16:18:30Z | null | Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4337/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4337/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4337.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4337",
"merged_at": "2022-05-16T16:18:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4337.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4337"
} | true | [
"Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.\r\n- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n\r\nThere are also some timeout errors, I don't really understand the source though :confused: ",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2407/comments | https://api.github.com/repos/huggingface/datasets/issues/2407/events | https://github.com/huggingface/datasets/issues/2407 | 903,111,755 | MDU6SXNzdWU5MDMxMTE3NTU= | 2,407 | .map() function got an unexpected keyword argument 'cache_file_name' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-05-27T01:54:26Z | 2021-05-27T13:46:40Z | 2021-05-27T13:46:40Z | null | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected keyword argument 'cache_file_name'".
I believe I'm using the latest `datasets`, 1.6.2. It also seems like the documentation and the actual code indicate there is an argument 'cache_file_name' for the .map() function.
Here is the code I use
## Steps to reproduce the bug
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=True,
cache_file_name="my_tokenized_file"
)
```
## Actual results
tokenized_datasets = datasets.map(
TypeError: map() got an unexpected keyword argument 'cache_file_name'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.6.2
- Platform:Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10
- Python version:3.8.5
- PyArrow version:3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2407/timeline | null | completed | null | null | false | [
"Hi @cindyxinyiwang,\r\nDid you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:\r\nhttps://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558",
"Hi ! `cache_file_name` is an argument of the `Dataset.map` method. Can you check that your `dataset` is indeed a `Dataset` object ?\r\n\r\nIf you loaded several splits, then it would actually be a `DatasetDict` (one dataset per split, in a dictionary).\r\nIn this case, since there are several datasets in the dict, the `DatasetDict.map` method requires a `cache_file_names` argument (with an 's'), so that you can provide one file name per split.",
"I think you are right. I used cache_file_names={data1: name1, data2: name2} and it works. Thank you!"
] |
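A minimal sketch of the resolution above: on a `DatasetDict` the argument is the plural `cache_file_names` (one file per split), while `cache_file_name` only exists on a single `Dataset`; the tokenizer, text column, and split names are assumptions.

```python
from datasets import load_from_disk
from transformers import AutoTokenizer

datasets = load_from_disk("path/to/dataset")  # assumed DatasetDict with train/validation splits
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"])

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    load_from_cache_file=True,
    # note the plural form: one cache file per split
    cache_file_names={
        "train": "train_tokenized.arrow",
        "validation": "validation_tokenized.arrow",
    },
)
```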
https://api.github.com/repos/huggingface/datasets/issues/3103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3103/comments | https://api.github.com/repos/huggingface/datasets/issues/3103/events | https://github.com/huggingface/datasets/pull/3103 | 1,029,069,310 | PR_kwDODunzps4tUzJQ | 3,103 | Fix project description in PyPI | [] | closed | false | null | 0 | 2021-10-18T12:47:29Z | 2021-10-18T12:59:57Z | 2021-10-18T12:59:56Z | null | Fix project description appearing in PyPI, so that it contains the content of the README.md file (like transformers).
Currently, `datasets` project description appearing in PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
Fix #3102. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3103/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3103/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3103",
"merged_at": "2021-10-18T12:59:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3103"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1080/comments | https://api.github.com/repos/huggingface/datasets/issues/1080/events | https://github.com/huggingface/datasets/pull/1080 | 756,663,464 | MDExOlB1bGxSZXF1ZXN0NTMyMTc3NDg5 | 1,080 | Add WikiANN NER dataset | [] | closed | false | null | 1 | 2020-12-03T23:09:24Z | 2020-12-06T17:18:55Z | 2020-12-06T17:18:55Z | null | This PR adds the full set of 176 languages from the balanced train/dev/test splits of WikiANN / PAN-X from: https://github.com/afshinrahimi/mmner
Until now, only 40 of these languages were available in `datasets` as part of the XTREME benchmark
Courtesy of the dataset author, we can now download this dataset from a Dropbox URL without needing a manual download anymore 🥳, so at some point it would be worth updating the PAN-X subset of XTREME as well 😄
Link to gist with some snippets for producing dummy data: https://gist.github.com/lewtun/5b93294ab6dbcf59d1493dbe2cfd6bb9
P.S. @yjernite I think I was confused about needing to generate a set of YAML tags per config, so ended up just adding a single one in the README. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1080/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1080.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1080",
"merged_at": "2020-12-06T17:18:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1080.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1080"
} | true | [
"Dataset card added, so ready for review!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3652/comments | https://api.github.com/repos/huggingface/datasets/issues/3652/events | https://github.com/huggingface/datasets/pull/3652 | 1,118,808,738 | PR_kwDODunzps4xzinr | 3,652 | sp. Columbia => Colombia | [] | closed | false | null | 2 | 2022-01-31T00:41:03Z | 2022-02-09T16:55:25Z | 2022-01-31T08:29:07Z | null | "Columbia" is various places in North America. The country is "Colombia". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3652/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3652/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3652",
"merged_at": "2022-01-31T08:29:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3652"
} | true | [
"The original openslr site mixed both names https://openslr.org/72/ :-)",
"Yeah, I filed the issue to have it fixed there last year, but it looks like they missed a few."
] |
https://api.github.com/repos/huggingface/datasets/issues/739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/739/comments | https://api.github.com/repos/huggingface/datasets/issues/739/events | https://github.com/huggingface/datasets/pull/739 | 723,044,066 | MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3 | 739 | Add wiki dpr multiset embeddings | [] | closed | false | null | 3 | 2020-10-16T09:05:49Z | 2020-11-26T14:02:50Z | 2020-11-26T14:02:49Z | null | There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset.
Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset.
In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/739/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/739",
"merged_at": "2020-11-26T14:02:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/739"
} | true | [
"I still have to compute the dataset_infos, and build + host the indexes",
"update: I'm computing the metadata, will update the PR soon",
"Finally all green and ready to merge :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/334/comments | https://api.github.com/repos/huggingface/datasets/issues/334/events | https://github.com/huggingface/datasets/pull/334 | 649,661,791 | MDExOlB1bGxSZXF1ZXN0NDQzMjk1NjQ0 | 334 | Add dataset.shard() method | [] | closed | false | null | 1 | 2020-07-02T06:05:19Z | 2020-07-06T12:35:36Z | 2020-07-06T12:35:36Z | null | Fixes https://github.com/huggingface/nlp/issues/312 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/334/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/334/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/334.diff",
"html_url": "https://github.com/huggingface/datasets/pull/334",
"merged_at": "2020-07-06T12:35:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/334.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/334"
} | true | [
"Great, done!"
] |
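For reference, a brief sketch of the `Dataset.shard()` method added by this PR, splitting a dataset into four shards and selecting one of them; the example dataset is arbitrary.

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# num_shards controls how many pieces the dataset is divided into,
# index selects which shard to return.
shard_0 = ds.shard(num_shards=4, index=0)
print(len(ds), len(shard_0))
```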
https://api.github.com/repos/huggingface/datasets/issues/724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/724/comments | https://api.github.com/repos/huggingface/datasets/issues/724/events | https://github.com/huggingface/datasets/issues/724 | 718,947,700 | MDU6SXNzdWU3MTg5NDc3MDA= | 724 | need to redirect /nlp to /datasets and remove outdated info | [] | closed | false | null | 4 | 2020-10-11T23:12:12Z | 2020-10-14T17:00:12Z | 2020-10-14T17:00:12Z | null | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
Also, for some reason, the new information is slightly borked. If you look at the old page, it was nicely formatted and had the links marked up; the new one is just a jumble of text in one chunk with no markup for links (i.e., not clickable). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/724/timeline | null | completed | null | null | false | [
"Should be fixed now: \r\n\r\n\r\n\r\nNot sure I understand what you mean by the second part?\r\n",
"Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* https://huggingface.co/datasets/wikihow\r\n* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all\r\nCan you see the difference? 2nd has formatting, 1st doesn't.\r\n",
"For context, those are two different pages (not an old vs new one), one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displayed some metadata about the dataset.\r\n\r\nFor the second one, we'll move to markdown parsing soon, so it'll be formatted better.",
"I understand. I was just flagging the lack of markup issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/4931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4931/comments | https://api.github.com/repos/huggingface/datasets/issues/4931/events | https://github.com/huggingface/datasets/pull/4931 | 1,362,298,764 | PR_kwDODunzps4-Y3L6 | 4,931 | Fix missing tags in dataset cards | [] | closed | false | null | 1 | 2022-09-05T17:03:04Z | 2022-09-22T12:40:15Z | 2022-09-06T05:39:29Z | null | Fix missing tags in dataset cards:
- coqa
- hyperpartisan_news_detection
- opinosis
- scientific_papers
- scifact
- search_qa
- wiki_qa
- wiki_split
- wikisql
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4931/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4931",
"merged_at": "2022-09-06T05:39:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4931"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2084/comments | https://api.github.com/repos/huggingface/datasets/issues/2084/events | https://github.com/huggingface/datasets/issues/2084 | 835,750,671 | MDU6SXNzdWU4MzU3NTA2NzE= | 2,084 | CUAD - Contract Understanding Atticus Dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2021-03-19T09:27:43Z | 2021-04-16T08:50:44Z | 2021-04-16T08:50:44Z | null | ## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** https://github.com/TheAtticusProject/cuad/
- **Motivation:** good domain specific datasets are valuable
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2084/timeline | null | completed | null | null | false | [
"+1 on this request"
] |
https://api.github.com/repos/huggingface/datasets/issues/2450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2450/comments | https://api.github.com/repos/huggingface/datasets/issues/2450/events | https://github.com/huggingface/datasets/issues/2450 | 912,890,291 | MDU6SXNzdWU5MTI4OTAyOTE= | 2,450 | BLUE file not found | [] | closed | false | null | 2 | 2021-06-06T17:01:54Z | 2021-06-07T10:46:15Z | 2021-06-07T10:46:15Z | null | Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 332, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 605, in load_metric
dataset=False,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 343, in prepare_module
combined_path, github_file_path
FileNotFoundError: Couldn't find file locally at blue/blue.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py.
The file is also not present on the master branch on github.
```
Here is dataset installed version info
```shell
pip freeze | grep datasets
datasets==1.7.0
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2450/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2450/timeline | null | completed | null | null | false | [
"Hi ! The `blue` metric doesn't exist, but the `bleu` metric does.\r\nYou can get the full list of metrics [here](https://github.com/huggingface/datasets/tree/master/metrics) or by running\r\n```python\r\nfrom datasets import list_metrics\r\n\r\nprint(list_metrics())\r\n```",
"Ah, my mistake. Thanks for correcting"
] |
https://api.github.com/repos/huggingface/datasets/issues/5552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5552/comments | https://api.github.com/repos/huggingface/datasets/issues/5552/events | https://github.com/huggingface/datasets/pull/5552 | 1,592,186,703 | PR_kwDODunzps5KXMjA | 5,552 | Make tiktoken tokenizers hashable | [] | closed | false | null | 4 | 2023-02-20T16:50:09Z | 2023-02-21T13:20:42Z | 2023-02-21T13:13:05Z | null | Fix for https://discord.com/channels/879548962464493619/1075729627546406912/1075729627546406912
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5552/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5552.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5552",
"merged_at": "2023-02-21T13:13:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5552.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5552"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011635 / 0.011353 (0.000282) | 0.005446 / 0.011008 (-0.005562) | 0.111044 / 0.038508 (0.072536) | 0.034243 / 0.023109 (0.011134) | 0.357560 / 0.275898 (0.081662) | 0.403940 / 0.323480 (0.080460) | 0.008532 / 0.007986 (0.000546) | 0.004327 / 0.004328 (-0.000002) | 0.084659 / 0.004250 (0.080408) | 0.040914 / 0.037052 (0.003861) | 0.367142 / 0.258489 (0.108653) | 0.381651 / 0.293841 (0.087810) | 0.053865 / 0.128546 (-0.074681) | 0.019060 / 0.075646 (-0.056587) | 0.371994 / 0.419271 (-0.047277) | 0.058417 / 0.043533 (0.014884) | 0.357740 / 0.255139 (0.102601) | 0.367423 / 0.283200 (0.084224) | 0.104336 / 0.141683 (-0.037347) | 1.632128 / 1.452155 (0.179974) | 1.676216 / 1.492716 (0.183499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199649 / 0.018006 (0.181642) | 0.490945 / 0.000490 (0.490455) | 0.001598 / 0.000200 (0.001398) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024541 / 0.037411 (-0.012871) | 0.104713 / 0.014526 (0.090187) | 0.119438 / 0.176557 (-0.057118) | 0.160854 / 0.737135 (-0.576281) | 0.127323 / 0.296338 (-0.169016) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.586483 / 0.215209 (0.371274) | 5.771689 / 2.077655 (3.694034) | 2.378962 / 1.504120 (0.874842) | 1.998787 / 1.541195 (0.457592) | 1.993016 / 1.468490 
(0.524526) | 1.199169 / 4.584777 (-3.385608) | 5.281648 / 3.745712 (1.535936) | 5.589235 / 5.269862 (0.319373) | 2.715162 / 4.565676 (-1.850514) | 0.153312 / 0.424275 (-0.270963) | 0.014302 / 0.007607 (0.006695) | 0.761185 / 0.226044 (0.535140) | 7.602517 / 2.268929 (5.333589) | 3.095271 / 55.444624 (-52.349354) | 2.407394 / 6.876477 (-4.469083) | 2.519074 / 2.142072 (0.377002) | 1.459270 / 4.805227 (-3.345957) | 0.259578 / 6.500664 (-6.241086) | 0.077356 / 0.075469 (0.001887) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502123 / 1.841788 (-0.339665) | 16.254010 / 8.074308 (8.179702) | 19.971713 / 10.191392 (9.780321) | 0.221491 / 0.680424 (-0.458933) | 0.043959 / 0.534201 (-0.490242) | 0.512566 / 0.579283 (-0.066717) | 0.594724 / 0.434364 (0.160360) | 0.573855 / 0.540337 (0.033518) | 0.680503 / 1.386936 (-0.706433) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008543 / 0.011353 (-0.002810) | 0.005828 / 0.011008 (-0.005180) | 0.083696 / 0.038508 (0.045188) | 0.036186 / 0.023109 (0.013077) | 0.379777 / 0.275898 (0.103879) | 0.437361 / 0.323480 (0.113881) | 0.006788 / 0.007986 (-0.001197) | 0.005110 / 0.004328 (0.000782) | 0.106075 / 0.004250 (0.101824) | 0.048770 / 0.037052 (0.011718) | 0.390770 / 0.258489 (0.132281) | 0.420813 / 0.293841 (0.126972) | 0.050622 / 0.128546 (-0.077924) | 0.019939 / 0.075646 (-0.055707) | 0.106890 / 0.419271 (-0.312382) | 0.070800 / 0.043533 (0.027267) | 0.406094 / 0.255139 (0.150955) | 0.419796 / 0.283200 (0.136597) | 0.107237 / 0.141683 (-0.034446) | 1.687894 / 1.452155 (0.235739) | 1.735680 / 1.492716 (0.242963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216403 / 0.018006 (0.198397) | 0.495002 / 0.000490 (0.494512) | 0.004841 / 0.000200 (0.004641) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.043774 / 0.037411 (0.006363) | 0.119144 / 0.014526 (0.104618) | 0.143694 / 0.176557 (-0.032862) | 0.195548 / 0.737135 (-0.541587) | 0.151426 / 0.296338 (-0.144912) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.617694 / 0.215209 (0.402485) | 6.216237 / 2.077655 (4.138582) | 2.578341 / 1.504120 (1.074221) | 2.184868 / 1.541195 (0.643673) | 2.244954 / 1.468490 (0.776464) | 1.236072 / 4.584777 (-3.348705) | 5.257919 / 3.745712 (1.512207) | 4.634682 / 5.269862 (-0.635180) | 2.722579 / 4.565676 (-1.843097) | 0.131433 / 0.424275 (-0.292843) | 0.012928 / 0.007607 (0.005321) | 0.768315 / 0.226044 (0.542270) | 7.625277 / 2.268929 (5.356349) | 3.146364 / 55.444624 (-52.298260) | 2.577886 / 6.876477 (-4.298590) | 2.572626 / 2.142072 (0.430554) | 1.468160 / 4.805227 (-3.337067) | 0.252524 / 6.500664 (-6.248140) | 0.083264 / 0.075469 (0.007794) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.452614 / 1.841788 (-0.389174) | 15.906162 / 8.074308 (7.831853) | 17.803630 / 10.191392 (7.612238) | 0.210769 / 0.680424 (-0.469655) | 0.024672 / 0.534201 (-0.509529) | 0.486486 / 0.579283 (-0.092797) | 0.545256 / 0.434364 (0.110892) | 0.598736 / 0.540337 (0.058399) | 0.689083 / 1.386936 (-0.697853) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008806 / 0.011353 (-0.002547) | 0.004947 / 0.011008 (-0.006061) | 0.098559 / 0.038508 (0.060051) | 0.034293 / 0.023109 (0.011183) | 0.311924 / 0.275898 (0.036026) | 0.377501 / 0.323480 (0.054021) | 0.007916 / 0.007986 (-0.000069) | 0.004131 / 0.004328 (-0.000197) | 0.074934 / 0.004250 (0.070684) | 0.043396 / 0.037052 (0.006344) | 0.344788 / 0.258489 (0.086299) | 0.369943 / 0.293841 (0.076102) | 0.036846 / 0.128546 (-0.091700) | 0.011803 / 0.075646 (-0.063843) | 0.331306 / 0.419271 (-0.087965) | 0.047015 / 0.043533 (0.003483) | 0.305890 / 0.255139 (0.050751) | 0.332658 / 0.283200 (0.049459) | 0.101134 / 0.141683 (-0.040549) | 1.485615 / 1.452155 (0.033461) | 1.510230 / 1.492716 (0.017514) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274272 / 0.018006 (0.256266) | 0.514739 / 0.000490 (0.514250) | 0.003433 / 0.000200 (0.003234) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.106416 / 0.014526 (0.091890) | 0.118761 / 0.176557 (-0.057796) | 0.156115 / 0.737135 (-0.581021) | 0.123801 / 0.296338 (-0.172537) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403121 / 0.215209 (0.187912) | 4.008806 / 2.077655 (1.931151) | 1.891253 / 1.504120 (0.387133) | 1.698523 / 1.541195 (0.157328) | 1.778533 / 1.468490 
(0.310043) | 0.688207 / 4.584777 (-3.896570) | 3.674350 / 3.745712 (-0.071362) | 1.848438 / 5.269862 (-3.421423) | 1.202380 / 4.565676 (-3.363297) | 0.073490 / 0.424275 (-0.350785) | 0.010655 / 0.007607 (0.003048) | 0.446939 / 0.226044 (0.220894) | 4.478489 / 2.268929 (2.209560) | 1.992281 / 55.444624 (-53.452343) | 1.684077 / 6.876477 (-5.192400) | 1.715435 / 2.142072 (-0.426638) | 0.731454 / 4.805227 (-4.073773) | 0.143679 / 6.500664 (-6.356985) | 0.053415 / 0.075469 (-0.022054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.060583 / 1.841788 (-0.781205) | 13.730462 / 8.074308 (5.656153) | 13.038976 / 10.191392 (2.847583) | 0.144168 / 0.680424 (-0.536256) | 0.025788 / 0.534201 (-0.508413) | 0.393332 / 0.579283 (-0.185951) | 0.409495 / 0.434364 (-0.024869) | 0.523745 / 0.540337 (-0.016592) | 0.601595 / 1.386936 (-0.785341) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006369 / 0.011353 (-0.004983) | 0.005019 / 0.011008 (-0.005990) | 0.065226 / 0.038508 (0.026718) | 0.029634 / 0.023109 (0.006524) | 0.302871 / 0.275898 (0.026972) | 0.331055 / 0.323480 (0.007575) | 0.005470 / 0.007986 (-0.002516) | 0.005372 / 0.004328 (0.001043) | 0.064930 / 0.004250 (0.060680) | 0.046979 / 0.037052 (0.009927) | 0.305633 / 0.258489 (0.047144) | 0.345305 / 0.293841 (0.051464) | 0.032951 / 0.128546 (-0.095596) | 0.011447 / 0.075646 (-0.064199) | 0.077054 / 0.419271 (-0.342218) | 0.045744 / 0.043533 (0.002211) | 0.303446 / 0.255139 (0.048307) | 0.319837 / 0.283200 (0.036637) | 0.098631 / 0.141683 (-0.043052) | 1.266593 / 1.452155 (-0.185562) | 1.355388 / 1.492716 (-0.137328) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291301 / 0.018006 (0.273295) | 0.537848 / 0.000490 (0.537359) | 0.006697 / 0.000200 (0.006497) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027677 / 0.037411 (-0.009734) | 0.099633 / 0.014526 (0.085107) | 0.110626 / 0.176557 (-0.065931) | 0.144724 / 0.737135 (-0.592412) | 0.114955 / 0.296338 (-0.181383) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.375344 / 0.215209 (0.160135) | 3.717490 / 2.077655 (1.639835) | 1.845886 / 1.504120 (0.341766) | 1.713274 / 1.541195 (0.172079) | 1.761286 / 1.468490 (0.292796) | 0.627924 / 4.584777 (-3.956853) | 3.628154 / 3.745712 (-0.117558) | 3.261851 / 5.269862 (-2.008011) | 1.701008 / 4.565676 (-2.864669) | 0.076703 / 0.424275 (-0.347572) | 0.010839 / 0.007607 (0.003231) | 0.459193 / 0.226044 (0.233148) | 4.589066 / 2.268929 (2.320137) | 2.193972 / 55.444624 (-53.250653) | 1.892115 / 6.876477 (-4.984362) | 1.892453 / 2.142072 (-0.249619) | 0.745727 / 4.805227 (-4.059500) | 0.150232 / 6.500664 (-6.350432) | 0.057245 / 0.075469 (-0.018224) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.114657 / 1.841788 (-0.727131) | 13.595215 / 8.074308 (5.520907) | 12.267177 / 10.191392 (2.075785) | 0.151362 / 0.680424 (-0.529061) | 0.015609 / 0.534201 (-0.518591) | 0.379151 / 0.579283 (-0.200132) | 0.386125 / 0.434364 (-0.048238) | 0.470037 / 0.540337 (-0.070301) | 0.562340 / 1.386936 (-0.824596) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009847 / 0.011353 (-0.001505) | 0.005609 / 0.011008 (-0.005399) | 0.101951 / 0.038508 (0.063443) | 0.038082 / 0.023109 (0.014972) | 0.299933 / 0.275898 (0.024035) | 0.377081 / 0.323480 (0.053601) | 0.008900 / 0.007986 (0.000915) | 0.004608 / 0.004328 (0.000279) | 0.077723 / 0.004250 (0.073473) | 0.048592 / 0.037052 (0.011540) | 0.310789 / 0.258489 (0.052300) | 0.345627 / 0.293841 (0.051787) | 0.038716 / 0.128546 (-0.089830) | 0.012653 / 0.075646 (-0.062993) | 0.336885 / 0.419271 (-0.082387) | 0.048715 / 0.043533 (0.005182) | 0.295336 / 0.255139 (0.040197) | 0.316735 / 0.283200 (0.033536) | 0.115142 / 0.141683 (-0.026541) | 1.480332 / 1.452155 (0.028177) | 1.604972 / 1.492716 (0.112256) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299516 / 0.018006 (0.281510) | 0.525892 / 0.000490 (0.525402) | 0.002246 / 0.000200 (0.002046) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031547 / 0.037411 (-0.005864) | 0.120611 / 0.014526 (0.106085) | 0.124516 / 0.176557 (-0.052041) | 0.166036 / 0.737135 (-0.571100) | 0.131689 / 0.296338 (-0.164650) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400728 / 0.215209 (0.185519) | 4.007027 / 2.077655 (1.929372) | 1.793922 / 1.504120 (0.289803) | 1.596709 / 1.541195 (0.055514) | 1.752130 / 1.468490 
(0.283640) | 0.717464 / 4.584777 (-3.867313) | 3.798844 / 3.745712 (0.053132) | 3.685088 / 5.269862 (-1.584774) | 1.914041 / 4.565676 (-2.651636) | 0.086181 / 0.424275 (-0.338094) | 0.012753 / 0.007607 (0.005146) | 0.507984 / 0.226044 (0.281940) | 5.086255 / 2.268929 (2.817326) | 2.280650 / 55.444624 (-53.163974) | 1.929294 / 6.876477 (-4.947183) | 2.057884 / 2.142072 (-0.084188) | 0.852863 / 4.805227 (-3.952364) | 0.165497 / 6.500664 (-6.335168) | 0.063356 / 0.075469 (-0.012113) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212593 / 1.841788 (-0.629194) | 16.270507 / 8.074308 (8.196199) | 15.708406 / 10.191392 (5.517014) | 0.162346 / 0.680424 (-0.518078) | 0.029702 / 0.534201 (-0.504499) | 0.447685 / 0.579283 (-0.131598) | 0.449361 / 0.434364 (0.014997) | 0.530536 / 0.540337 (-0.009801) | 0.613439 / 1.386936 (-0.773497) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007741 / 0.011353 (-0.003612) | 0.005752 / 0.011008 (-0.005256) | 0.076600 / 0.038508 (0.038092) | 0.034841 / 0.023109 (0.011732) | 0.345106 / 0.275898 (0.069208) | 0.385685 / 0.323480 (0.062205) | 0.006466 / 0.007986 (-0.001519) | 0.005806 / 0.004328 (0.001478) | 0.075110 / 0.004250 (0.070860) | 0.052936 / 0.037052 (0.015883) | 0.343576 / 0.258489 (0.085087) | 0.408749 / 0.293841 (0.114908) | 0.037345 / 0.128546 (-0.091201) | 0.012807 / 0.075646 (-0.062839) | 0.087732 / 0.419271 (-0.331540) | 0.050218 / 0.043533 (0.006685) | 0.338963 / 0.255139 (0.083824) | 0.361629 / 0.283200 (0.078429) | 0.107488 / 0.141683 (-0.034195) | 1.465284 / 1.452155 (0.013130) | 1.562218 / 1.492716 (0.069502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322496 / 0.018006 (0.304489) | 0.522782 / 0.000490 (0.522292) | 0.006680 / 0.000200 (0.006480) | 0.000144 / 0.000054 (0.000090) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031801 / 0.037411 (-0.005611) | 0.116839 / 0.014526 (0.102313) | 0.127552 / 0.176557 (-0.049005) | 0.167670 / 0.737135 (-0.569465) | 0.134170 / 0.296338 (-0.162168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425449 / 0.215209 (0.210240) | 4.229367 / 2.077655 (2.151713) | 2.014663 / 1.504120 (0.510543) | 1.812981 / 1.541195 (0.271787) | 1.964039 / 1.468490 (0.495549) | 0.703454 / 4.584777 (-3.881323) | 3.786985 / 3.745712 (0.041273) | 2.262377 / 5.269862 (-3.007485) | 1.404868 / 4.565676 (-3.160808) | 0.086234 / 0.424275 (-0.338041) | 0.012616 / 0.007607 (0.005009) | 0.525784 / 0.226044 (0.299739) | 5.268295 / 2.268929 (2.999366) | 2.496674 / 55.444624 (-52.947950) | 2.177773 / 6.876477 (-4.698704) | 2.313677 / 2.142072 (0.171605) | 0.846202 / 4.805227 (-3.959026) | 0.170152 / 6.500664 (-6.330513) | 0.066772 / 0.075469 (-0.008698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254719 / 1.841788 (-0.587069) | 16.017627 / 8.074308 (7.943319) | 14.560583 / 10.191392 (4.369191) | 0.168275 / 0.680424 (-0.512149) | 0.017935 / 0.534201 (-0.516266) | 0.430806 / 0.579283 (-0.148477) | 0.428737 / 0.434364 (-0.005626) | 0.532001 / 0.540337 (-0.008336) | 0.633680 / 1.386936 (-0.753256) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4099/comments | https://api.github.com/repos/huggingface/datasets/issues/4099/events | https://github.com/huggingface/datasets/issues/4099 | 1,193,253,768 | I_kwDODunzps5HH5uI | 4,099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-04-05T14:42:38Z | 2022-04-06T06:37:44Z | 2022-04-06T06:35:54Z | null | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected results
The dataset should be downloaded without exceptions.
## Actual results
Stack trace (from the second run):
Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477...
Downloading data files: 100%
2/2 [00:00<00:00, 88.48it/s]
Extracting data files: 100%
2/2 [00:00<00:00, 79.60it/s]
UnicodeDecodeErrorTraceback (most recent call last)
<ipython-input-31-79c26bd1109c> in <module>
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja")
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 )
605
--> 606 # By default, return all splits
607 if split is None:
608 split = {s: s for s in self.info.splits}
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
692 Args:
693 split: `datasets.Split` which subset of the data to read.
--> 694
695 Returns:
696 `Dataset`
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
/usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self)
252 if not self.disable:
253 self.display(check_delay=False)
--> 254
255 def __iter__(self):
256 try:
/usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
1183 for obj in iterable:
1184 yield obj
-> 1185 return
1186
1187 mininterval = self.mininterval
~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths)
140 logger.info("Generating examples from = %s", filepath)
141 with open(filepath[0], "r") as f:
--> 142 data = json.load(f)
143
144 for doc in data["documents"]:
/usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
294
295 """
--> 296 return loads(fp.read(),
297 cls=cls, object_hook=object_hook,
298 parse_float=parse_float, parse_int=parse_int,
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
## Environment info
- `datasets` version: 2.0.0 (but reproduced with many previous versions)
- Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux; base Docker image: huggingface/transformers-pytorch-cpu
- Python version: 3.6.9
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4099/timeline | null | completed | null | null | false | [
"Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 194\r\n })\r\n validation: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 71\r\n })\r\n})\r\n```\r\n\r\nThe only reason I can imagine this issue may arise is if your default encoding is not \"UTF-8\" (and it is ASCII instead). This is usually the case on Windows machines; but you say your environment is a Linux machine. Maybe you change your machine default encoding?\r\n\r\nCould you please check this?\r\n```python\r\nIn [6]: import sys\r\n\r\nIn [7]: sys.getdefaultencoding()\r\nOut[7]: 'utf-8'\r\n```",
"I opened a PR in the original dataset loading script:\r\n- microsoft/unilm#677\r\n\r\nand fixed the corresponding dataset script on the Hub:\r\n- https://huggingface.co/datasets/nielsr/XFUN/commit/73ba5e026621e05fb756ae0f267eb49971f70ebd",
"import sys\r\nsys.getdefaultencoding()\r\n\r\nreturned: 'utf-8'\r\n\r\n---------------------\r\n\r\nI've just cloned master branch - your fix works! Thank you!"
] |
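For context on the fix referenced in the comments above: the traceback shows the XFUN script opening its JSON annotation file with a bare `open(filepath[0], "r")`, so decoding falls back to the platform's preferred encoding rather than UTF-8. Note that `open()` takes its default from `locale.getpreferredencoding(False)`, not from `sys.getdefaultencoding()`, which is why the check in the comments could return `utf-8` while decoding still failed. A minimal sketch of the kind of change that resolves it (the function name and return value are illustrative, not the script's actual code):

```python
import json

def _load_annotations(filepath):
    # Passing encoding="utf-8" explicitly avoids falling back to the
    # locale's preferred codec (ASCII in the reporter's container), which
    # cannot decode the Japanese text in the XFUN annotation files.
    with open(filepath, "r", encoding="utf-8") as f:
        data = json.load(f)
    return data["documents"]
```

Per the reporter's last comment, `load_dataset("nielsr/XFUN", "xfun.ja")` works once the Hub script was updated along these lines.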
https://api.github.com/repos/huggingface/datasets/issues/5978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5978/comments | https://api.github.com/repos/huggingface/datasets/issues/5978/events | https://github.com/huggingface/datasets/pull/5978 | 1,770,187,053 | PR_kwDODunzps5Tru2_ | 5,978 | Release: 2.13.1 | [] | closed | false | null | 4 | 2023-06-22T18:23:11Z | 2023-06-22T18:40:24Z | 2023-06-22T18:30:16Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5978/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5978",
"merged_at": "2023-06-22T18:30:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5978"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006173 / 0.011353 (-0.005180) | 0.003773 / 0.011008 (-0.007235) | 0.099499 / 0.038508 (0.060991) | 0.037918 / 0.023109 (0.014809) | 0.321329 / 0.275898 (0.045431) | 0.379739 / 0.323480 (0.056259) | 0.004664 / 0.007986 (-0.003322) | 0.002943 / 0.004328 (-0.001385) | 0.077759 / 0.004250 (0.073509) | 0.055271 / 0.037052 (0.018219) | 0.329428 / 0.258489 (0.070939) | 0.378731 / 0.293841 (0.084890) | 0.027737 / 0.128546 (-0.100810) | 0.008566 / 0.075646 (-0.067081) | 0.313220 / 0.419271 (-0.106052) | 0.047101 / 0.043533 (0.003568) | 0.316211 / 0.255139 (0.061072) | 0.341826 / 0.283200 (0.058626) | 0.020838 / 0.141683 (-0.120845) | 1.550064 / 1.452155 (0.097909) | 1.706518 / 1.492716 (0.213801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203093 / 0.018006 (0.185087) | 0.425345 / 0.000490 (0.424856) | 0.004800 / 0.000200 (0.004600) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024590 / 0.037411 (-0.012821) | 0.098115 / 0.014526 (0.083589) | 0.108274 / 0.176557 (-0.068282) | 0.170804 / 0.737135 (-0.566332) | 0.110560 / 0.296338 (-0.185778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425251 / 0.215209 (0.210042) | 4.239075 / 2.077655 (2.161421) | 1.955601 / 1.504120 (0.451481) | 1.774796 / 1.541195 (0.233602) | 1.826641 / 1.468490 
(0.358150) | 0.558777 / 4.584777 (-4.026000) | 3.361697 / 3.745712 (-0.384015) | 1.764468 / 5.269862 (-3.505394) | 1.032280 / 4.565676 (-3.533396) | 0.067872 / 0.424275 (-0.356403) | 0.010998 / 0.007607 (0.003391) | 0.525682 / 0.226044 (0.299637) | 5.254356 / 2.268929 (2.985427) | 2.384332 / 55.444624 (-53.060292) | 2.045578 / 6.876477 (-4.830898) | 2.170914 / 2.142072 (0.028841) | 0.674782 / 4.805227 (-4.130445) | 0.135351 / 6.500664 (-6.365314) | 0.066591 / 0.075469 (-0.008878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209181 / 1.841788 (-0.632606) | 14.044518 / 8.074308 (5.970210) | 13.184705 / 10.191392 (2.993313) | 0.130836 / 0.680424 (-0.549588) | 0.016582 / 0.534201 (-0.517619) | 0.360005 / 0.579283 (-0.219279) | 0.379519 / 0.434364 (-0.054845) | 0.422174 / 0.540337 (-0.118164) | 0.515546 / 1.386936 (-0.871390) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006293 / 0.011353 (-0.005060) | 0.003784 / 0.011008 (-0.007224) | 0.079248 / 0.038508 (0.040739) | 0.038452 / 0.023109 (0.015343) | 0.444727 / 0.275898 (0.168829) | 0.500535 / 0.323480 (0.177055) | 0.003455 / 0.007986 (-0.004531) | 0.002873 / 0.004328 (-0.001455) | 0.077439 / 0.004250 (0.073189) | 0.047855 / 0.037052 (0.010803) | 0.448049 / 0.258489 (0.189560) | 0.509517 / 0.293841 (0.215676) | 0.028359 / 0.128546 (-0.100188) | 0.008503 / 0.075646 (-0.067143) | 0.084961 / 0.419271 (-0.334310) | 0.042880 / 0.043533 (-0.000653) | 0.436628 / 0.255139 (0.181489) | 0.456574 / 0.283200 (0.173375) | 0.019539 / 0.141683 (-0.122144) | 1.561273 / 1.452155 (0.109118) | 1.572018 / 1.492716 (0.079301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230250 / 0.018006 (0.212244) | 0.415189 / 0.000490 (0.414700) | 0.003213 / 0.000200 (0.003013) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025541 / 0.037411 (-0.011871) | 0.102326 / 0.014526 (0.087800) | 0.110258 / 0.176557 (-0.066298) | 0.162488 / 0.737135 (-0.574647) | 0.112782 / 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457936 / 0.215209 (0.242727) | 4.581503 / 2.077655 (2.503848) | 2.237659 / 1.504120 (0.733540) | 2.029960 / 1.541195 (0.488765) | 2.082911 / 1.468490 (0.614421) | 0.556485 / 4.584777 (-4.028292) | 3.384418 / 3.745712 (-0.361295) | 1.748809 / 5.269862 (-3.521053) | 1.034759 / 4.565676 (-3.530917) | 0.067500 / 0.424275 (-0.356776) | 0.011425 / 0.007607 (0.003818) | 0.561340 / 0.226044 (0.335295) | 5.623629 / 2.268929 (3.354701) | 2.733587 / 55.444624 (-52.711038) | 2.401578 / 6.876477 (-4.474899) | 2.524569 / 2.142072 (0.382496) | 0.673170 / 4.805227 (-4.132057) | 0.136681 / 6.500664 (-6.363983) | 0.068060 / 0.075469 (-0.007409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318651 / 1.841788 (-0.523137) | 14.362123 / 8.074308 (6.287815) | 14.385964 / 10.191392 (4.194572) | 0.149914 / 0.680424 (-0.530510) | 0.016877 / 0.534201 (-0.517324) | 0.358406 / 0.579283 (-0.220877) | 0.394349 / 0.434364 (-0.040015) | 0.422471 / 0.540337 (-0.117866) | 0.513807 / 1.386936 (-0.873129) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006272 / 0.011353 (-0.005080) | 0.003903 / 0.011008 (-0.007105) | 0.100180 / 0.038508 (0.061672) | 0.037799 / 0.023109 (0.014690) | 0.385627 / 0.275898 (0.109729) | 0.446518 / 0.323480 (0.123038) | 0.004811 / 0.007986 (-0.003175) | 0.003032 / 0.004328 (-0.001296) | 0.077063 / 0.004250 (0.072812) | 0.055564 / 0.037052 (0.018512) | 0.397346 / 0.258489 (0.138857) | 0.443242 / 0.293841 (0.149401) | 0.027904 / 0.128546 (-0.100642) | 0.008386 / 0.075646 (-0.067260) | 0.315013 / 0.419271 (-0.104259) | 0.047943 / 0.043533 (0.004410) | 0.378443 / 0.255139 (0.123304) | 0.411472 / 0.283200 (0.128272) | 0.020465 / 0.141683 (-0.121218) | 1.526594 / 1.452155 (0.074439) | 1.547018 / 1.492716 (0.054301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219377 / 0.018006 (0.201370) | 0.430254 / 0.000490 (0.429764) | 0.003218 / 0.000200 (0.003018) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023667 / 0.037411 (-0.013744) | 0.099143 / 0.014526 (0.084617) | 0.106044 / 0.176557 (-0.070513) | 0.166186 / 0.737135 (-0.570949) | 0.108736 / 0.296338 (-0.187603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437971 / 0.215209 (0.222762) | 4.363675 / 2.077655 (2.286021) | 2.011993 / 1.504120 (0.507873) | 1.845189 / 1.541195 (0.303994) | 1.831848 / 1.468490 
(0.363358) | 0.562402 / 4.584777 (-4.022375) | 3.365259 / 3.745712 (-0.380453) | 1.781491 / 5.269862 (-3.488371) | 1.023454 / 4.565676 (-3.542223) | 0.067857 / 0.424275 (-0.356418) | 0.011076 / 0.007607 (0.003469) | 0.532267 / 0.226044 (0.306223) | 5.340344 / 2.268929 (3.071415) | 2.388649 / 55.444624 (-53.055976) | 2.055373 / 6.876477 (-4.821104) | 2.205047 / 2.142072 (0.062975) | 0.672909 / 4.805227 (-4.132318) | 0.135244 / 6.500664 (-6.365420) | 0.066184 / 0.075469 (-0.009285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206838 / 1.841788 (-0.634950) | 13.967075 / 8.074308 (5.892767) | 13.143971 / 10.191392 (2.952579) | 0.143991 / 0.680424 (-0.536433) | 0.016673 / 0.534201 (-0.517527) | 0.376180 / 0.579283 (-0.203103) | 0.386550 / 0.434364 (-0.047814) | 0.440590 / 0.540337 (-0.099747) | 0.529974 / 1.386936 (-0.856962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003784 / 0.011008 (-0.007224) | 0.077875 / 0.038508 (0.039367) | 0.038689 / 0.023109 (0.015580) | 0.421684 / 0.275898 (0.145786) | 0.472649 / 0.323480 (0.149169) | 0.003570 / 0.007986 (-0.004415) | 0.004448 / 0.004328 (0.000120) | 0.077867 / 0.004250 (0.073616) | 0.049514 / 0.037052 (0.012462) | 0.375983 / 0.258489 (0.117494) | 0.470632 / 0.293841 (0.176791) | 0.028238 / 0.128546 (-0.100308) | 0.008462 / 0.075646 (-0.067185) | 0.082452 / 0.419271 (-0.336819) | 0.043617 / 0.043533 (0.000084) | 0.400874 / 0.255139 (0.145735) | 0.426191 / 0.283200 (0.142992) | 0.020602 / 0.141683 (-0.121081) | 1.567658 / 1.452155 (0.115504) | 1.572610 / 1.492716 (0.079893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246144 / 0.018006 (0.228138) | 0.419402 / 0.000490 (0.418913) | 0.001691 / 0.000200 (0.001491) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026105 / 0.037411 (-0.011306) | 0.104734 / 0.014526 (0.090208) | 0.110257 / 0.176557 (-0.066300) | 0.161429 / 0.737135 (-0.575706) | 0.114367 / 0.296338 (-0.181972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453352 / 0.215209 (0.238143) | 4.537924 / 2.077655 (2.460269) | 2.196193 / 1.504120 (0.692073) | 2.002087 / 1.541195 (0.460892) | 2.041722 / 1.468490 (0.573231) | 0.561643 / 4.584777 (-4.023134) | 3.449108 / 3.745712 (-0.296605) | 2.862800 / 5.269862 (-2.407062) | 1.387895 / 4.565676 (-3.177782) | 0.068076 / 0.424275 (-0.356199) | 0.011568 / 0.007607 (0.003961) | 0.559279 / 0.226044 (0.333235) | 5.598738 / 2.268929 (3.329809) | 2.676649 / 55.444624 (-52.767975) | 2.334588 / 6.876477 (-4.541889) | 2.376215 / 2.142072 (0.234142) | 0.673109 / 4.805227 (-4.132118) | 0.137587 / 6.500664 (-6.363077) | 0.069131 / 0.075469 (-0.006338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307332 / 1.841788 (-0.534456) | 14.536036 / 8.074308 (6.461728) | 14.173734 / 10.191392 (3.982342) | 0.145143 / 0.680424 (-0.535281) | 0.016662 / 0.534201 (-0.517539) | 0.366901 / 0.579283 (-0.212383) | 0.394498 / 0.434364 (-0.039866) | 0.430546 / 0.540337 (-0.109792) | 0.518950 / 1.386936 (-0.867986) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008122 / 0.011353 (-0.003231) | 0.005585 / 0.011008 (-0.005424) | 0.121219 / 0.038508 (0.082711) | 0.047616 / 0.023109 (0.024507) | 0.440576 / 0.275898 (0.164678) | 0.491053 / 0.323480 (0.167573) | 0.004774 / 0.007986 (-0.003211) | 0.006758 / 0.004328 (0.002430) | 0.103852 / 0.004250 (0.099602) | 0.071560 / 0.037052 (0.034508) | 0.463107 / 0.258489 (0.204618) | 0.516904 / 0.293841 (0.223063) | 0.048052 / 0.128546 (-0.080494) | 0.013679 / 0.075646 (-0.061968) | 0.428383 / 0.419271 (0.009112) | 0.069468 / 0.043533 (0.025936) | 0.432593 / 0.255139 (0.177454) | 0.471810 / 0.283200 (0.188611) | 0.037541 / 0.141683 (-0.104142) | 1.823490 / 1.452155 (0.371335) | 1.922558 / 1.492716 (0.429842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252315 / 0.018006 (0.234309) | 0.541757 / 0.000490 (0.541267) | 0.000373 / 0.000200 (0.000173) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030361 / 0.037411 (-0.007050) | 0.125928 / 0.014526 (0.111402) | 0.145102 / 0.176557 (-0.031455) | 0.209798 / 0.737135 (-0.527337) | 0.147349 / 0.296338 (-0.148990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627554 / 0.215209 (0.412345) | 5.917422 / 2.077655 (3.839767) | 2.491083 / 1.504120 (0.986963) | 2.147078 / 1.541195 (0.605883) | 2.167511 / 1.468490 
(0.699021) | 0.903061 / 4.584777 (-3.681716) | 5.518537 / 3.745712 (1.772825) | 2.654348 / 5.269862 (-2.615514) | 1.645121 / 4.565676 (-2.920556) | 0.103782 / 0.424275 (-0.320493) | 0.013048 / 0.007607 (0.005441) | 0.756732 / 0.226044 (0.530687) | 7.622873 / 2.268929 (5.353945) | 3.122689 / 55.444624 (-52.321936) | 2.537735 / 6.876477 (-4.338742) | 2.640090 / 2.142072 (0.498018) | 1.128635 / 4.805227 (-3.676593) | 0.228089 / 6.500664 (-6.272575) | 0.086207 / 0.075469 (0.010738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561591 / 1.841788 (-0.280197) | 18.110299 / 8.074308 (10.035991) | 20.718017 / 10.191392 (10.526625) | 0.225741 / 0.680424 (-0.454682) | 0.031738 / 0.534201 (-0.502463) | 0.530789 / 0.579283 (-0.048495) | 0.607364 / 0.434364 (0.173000) | 0.581593 / 0.540337 (0.041256) | 0.726033 / 1.386936 (-0.660903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009323 / 0.011353 (-0.002030) | 0.005360 / 0.011008 (-0.005649) | 0.103608 / 0.038508 (0.065100) | 0.050158 / 0.023109 (0.027049) | 0.499906 / 0.275898 (0.224008) | 0.561005 / 0.323480 (0.237525) | 0.005093 / 0.007986 (-0.002892) | 0.008285 / 0.004328 (0.003956) | 0.103446 / 0.004250 (0.099196) | 0.061478 / 0.037052 (0.024426) | 0.494016 / 0.258489 (0.235527) | 0.537550 / 0.293841 (0.243709) | 0.048829 / 0.128546 (-0.079717) | 0.017032 / 0.075646 (-0.058614) | 0.107748 / 0.419271 (-0.311524) | 0.065607 / 0.043533 (0.022074) | 0.488709 / 0.255139 (0.233570) | 0.512023 / 0.283200 (0.228823) | 0.032067 / 0.141683 (-0.109616) | 1.907585 / 1.452155 (0.455431) | 1.960994 / 1.492716 (0.468278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278378 / 0.018006 (0.260371) | 0.551474 / 0.000490 (0.550985) | 0.006886 / 0.000200 (0.006686) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030674 / 0.037411 (-0.006737) | 0.135179 / 0.014526 (0.120654) | 0.133703 / 0.176557 (-0.042853) | 0.198923 / 0.737135 (-0.538212) | 0.155108 / 0.296338 (-0.141231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.690566 / 0.215209 (0.475357) | 6.789594 / 2.077655 (4.711940) | 2.940668 / 1.504120 (1.436549) | 2.562431 / 1.541195 (1.021236) | 2.554232 / 1.468490 (1.085742) | 0.888470 / 4.584777 (-3.696307) | 5.672318 / 3.745712 (1.926606) | 2.741626 / 5.269862 (-2.528236) | 1.818336 / 4.565676 (-2.747340) | 0.110434 / 0.424275 (-0.313841) | 0.014114 / 0.007607 (0.006507) | 0.830632 / 0.226044 (0.604588) | 8.270787 / 2.268929 (6.001859) | 3.723486 / 55.444624 (-51.721139) | 2.993671 / 6.876477 (-3.882806) | 2.918273 / 2.142072 (0.776201) | 1.105337 / 4.805227 (-3.699891) | 0.222976 / 6.500664 (-6.277688) | 0.085290 / 0.075469 (0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.816027 / 1.841788 (-0.025760) | 18.496850 / 8.074308 (10.422541) | 20.457032 / 10.191392 (10.265640) | 0.243533 / 0.680424 (-0.436891) | 0.027044 / 0.534201 (-0.507157) | 0.500752 / 0.579283 (-0.078531) | 0.620963 / 0.434364 (0.186599) | 0.607995 / 0.540337 (0.067658) | 0.722915 / 1.386936 (-0.664021) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5294/comments | https://api.github.com/repos/huggingface/datasets/issues/5294/events | https://github.com/huggingface/datasets/pull/5294 | 1,463,679,582 | PR_kwDODunzps5DqgLW | 5,294 | Support streaming datasets with pathlib.Path.with_suffix | [] | closed | false | null | 1 | 2022-11-24T18:04:38Z | 2022-11-29T07:09:08Z | 2022-11-29T07:06:32Z | null | This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`.
Fix #5293. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5294/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5294.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5294",
"merged_at": "2022-11-29T07:06:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5294.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5294"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4063/comments | https://api.github.com/repos/huggingface/datasets/issues/4063/events | https://github.com/huggingface/datasets/pull/4063 | 1,186,611,368 | PR_kwDODunzps41UiDm | 4,063 | Increase max retries for GitHub metrics | [] | closed | false | null | 1 | 2022-03-30T15:12:48Z | 2022-03-31T14:42:52Z | 2022-03-31T14:37:47Z | null | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub metrics.
Related to:
- #3134
Also related to:
- #4059 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4063/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4063.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4063",
"merged_at": "2022-03-31T14:37:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4063.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4063"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3468/comments | https://api.github.com/repos/huggingface/datasets/issues/3468/events | https://github.com/huggingface/datasets/pull/3468 | 1,085,871,301 | PR_kwDODunzps4wIozO | 3,468 | Add COCO dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 7 | 2021-12-21T14:07:50Z | 2022-10-03T09:38:07Z | 2022-10-03T09:36:08Z | null | This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection.
Some notes:
* the data exposed by TFDS is contained in the `2014`, `2015`, `2017` and `2017_panoptic_segmentation` configs here
* I've updated `encode_nested_example` for easier handling of missing values (cc @lhoestq @albertvillanova; will add tests if you are OK with the changes in `features.py`)
* this implementation should fix https://github.com/huggingface/datasets/pull/3377#issuecomment-985559427
TODOs:
- [x] dataset card
- [ ] dummy data
cc @merveenoyan
Closes #2526 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3468/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3468.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3468",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3468.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3468"
} | true | [
"The CI failures other than a missing dummy data file and missing fields in the card are unrelated to this PR. ",
"Thanks a lot for this great work and fixing TFDS based script @mariosasko 🤗 will generate the dummy dataset and write the model card tomorrow!",
"@mariosasko I added the dataset card, I'm on the dummy data rn. ",
"@merveenoyan Let me know if you need any help with the dummy data.\r\n\r\nI plan to split the current script/dataset into 4 smaller scripts/datasets to make sure they are properly indexed by Papers With Code later on. In this format:\r\n* the `*_image_captioning` configs will form the [COCO Captions](https://paperswithcode.com/sota/image-captioning-on-coco-captions) dataset (also present in TFDS, but only the 2017 version)\r\n* the `stuff_segmentation` config will form the [COCO Stuff](https://paperswithcode.com/dataset/coco-stuff) dataset\r\n* the `desnepose` config will form the [DensePose-COCO](https://paperswithcode.com/dataset/densepose) dataset\r\n* the rest will be [COCO](https://paperswithcode.com/dataset/coco) (+ will add the `minival` and the `valminusminival` splits to COCO 2014)\r\n\r\nAlso, if I find the time, I'll add preprocessing examples that rely on `pycocotools` to the README files.",
"@mariosasko I feel like we can just push main COCO and add Captions + Stuff later, WDYT?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your contribution, @mariosasko and @merveenoyan. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
https://api.github.com/repos/huggingface/datasets/issues/2941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2941/comments | https://api.github.com/repos/huggingface/datasets/issues/2941/events | https://github.com/huggingface/datasets/issues/2941 | 1,000,000,711 | I_kwDODunzps47mszH | 2,941 | OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | 1 | 2021-09-18T10:39:13Z | 2022-01-19T14:10:07Z | null | null | ## Describe the bug
Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`.
## Steps to reproduce the bug
```python
>>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko')
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}]
```
## Expected results
Loading is successful.
## Actual results
Loading throws above error.
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2941/timeline | null | null | null | null | false | [
"I tried `unshuffled_original_da` and it is also not working"
] |
https://api.github.com/repos/huggingface/datasets/issues/1931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1931/comments | https://api.github.com/repos/huggingface/datasets/issues/1931/events | https://github.com/huggingface/datasets/pull/1931 | 814,225,074 | MDExOlB1bGxSZXF1ZXN0NTc4MjQ4NTA5 | 1,931 | add m_lama (multilingual lama) dataset | [] | closed | false | null | 3 | 2021-02-23T08:11:57Z | 2021-03-01T10:01:03Z | 2021-03-01T10:01:03Z | null | Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1931/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1931",
"merged_at": "2021-03-01T10:01:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1931"
} | true | [
"Hi, it seems I am somewhat stuck here. The failed test `ci/circleci: run_dataset_script_tests_pyarrow_1_WIN` seems to be caused by some broken connection (`ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host`). Any help on this is appreciated. \r\n\r\nEdit: Seems to be resolved now.",
"I guess the `dummy_data.zip` is too large. I can reduce the languages that are contained there, but when testing it, it obviously throws an error, as not all files can be found. I guess I can either i) change the default value regarding which languages are loaded or ii) let the `_generate_examples` silently skip any language for which it cannot find files. Both solutions are not really pretty - is there another way around this?",
"Thanks for the review and the constructive comments :) ! I tried to address them, and reduced the number of lines in the dummy data to 1 to reduce its size. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2608/comments | https://api.github.com/repos/huggingface/datasets/issues/2608/events | https://github.com/huggingface/datasets/pull/2608 | 938,897,626 | MDExOlB1bGxSZXF1ZXN0Njg1MjAwMDYw | 2,608 | Support streaming JSON files | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-07T13:30:22Z | 2021-07-12T14:12:31Z | 2021-07-08T16:08:41Z | null | Use `open` in the JSON dataset builder, so that it can be patched with `xopen` for streaming. | {
Close #2607. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2608/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2608/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2608.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2608",
"merged_at": "2021-07-08T16:08:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2608.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2608"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2121/comments | https://api.github.com/repos/huggingface/datasets/issues/2121/events | https://github.com/huggingface/datasets/pull/2121 | 842,148,633 | MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4 | 2,121 | Add Validation For README | [] | closed | false | null | 7 | 2021-03-26T17:02:17Z | 2021-05-10T13:17:18Z | 2021-05-10T09:41:41Z | null | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2121/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2121",
"merged_at": "2021-05-10T09:41:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2121"
} | true | [
"Good start! Here are some proposed next steps:\r\n- We want the Class structure to reflect the template - so the parser know what section titles to expect and when something has gone wrong\r\n- As a result, we don't need to parse the table of contents, since it will always be the same\r\n- For each section/subsection it would be cool to have a variable saying whether it's filled out or not (when it's either empty or has `[More Information Needed]`)\r\n- `attributes` should probably be `text`",
"@yjernite @lhoestq \r\n\r\nI have added basic validation checking in the class. It works based on a YAML string. The YAML string determines the expected structure and which text is to be checked. The `text` can be true or false showing whether the text has to be checked or not for emptiness. Similarly, each subsection is parsed recursively. I have used print statement currently so that all issues are shown.\r\n\r\nPlease let me know your thoughts.\r\n\r\nI haven't added a variable that keeps a track of whether the text is empty or not but it can be done easliy if required.",
"This looks like a good start !\r\nMaybe we can use a field named `allow_empty` instead of `text` ?\r\nAlso +1 for keeping track of empty texts\r\n\r\nDo you think you can have a way to collect all the validation fails of a readme and then raise an error showing all the failures instead of using print ?\r\n\r\nThen we can create a `tests/test_dataset_cards.py` test file to make sure all the readmes of the repo are valid !",
"Hi @lhoestq \r\n\r\nI have added changes accordingly. I prepared a list which stores all the errors and raises them at the end. I'm not sure if there is a better way.",
"Hi @lhoestq @yjernite \r\n\r\nPlease find the output for the existing READMEs here: http://p.ip.fi/2vYU\r\n\r\nThanks,\r\nGunjan",
"Hi @lhoestq\r\n\r\nI have added some basic tests, also have restructured `ReadMe` class slightly.\r\n\r\nThere is one print statement currently, I'm not sure how to remove it. Basically, I want to warn but not stop further validation. I can't append to a list because the `error_list` and `warning_list` are both only present in `validate` method, and this print is present in the `parse` method. This is done when someone has repeated a section multiple times. For e.g.:\r\n\r\n```markdown\r\n---\r\n---\r\n\r\n# Dataset Card for FashionMNIST\r\n## Dataset Description\r\n## Dataset Description\r\n```\r\n\r\nIn this case, I check for validation only in the latest entry.\r\n\r\nI can also raise an error (ideal case scenario), but still, it is in the `parse`. Should I add `error_lines` and `warning_lines` as instance variables? That would probably solve the issue.\r\n\r\nIn tests, I'm using a dummy YAML string for structure, we can also make it into a file but I feel that is not a hard requirement. Let me know your thoughts.\r\n\r\nI will add tests for `from_readme` as well.\r\n\r\nHowever, I would love to be able to check the exact message in the test when an error is raised. I checked a couple of methods but couldn't get it working. Let me know if you're aware of a way to do that.",
"Hi @lhoestq \r\n\r\nThanks for merging. :)\r\nThanks a lot to you and @yjernite for guiding me and helping me out.\r\n\r\nYes, I'll also use the next PR for combining the readme and tags validation. ^_^"
] |
https://api.github.com/repos/huggingface/datasets/issues/5180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5180/comments | https://api.github.com/repos/huggingface/datasets/issues/5180/events | https://github.com/huggingface/datasets/issues/5180 | 1,431,012,438 | I_kwDODunzps5VS4RW | 5,180 | An example or recommendations for creating large image datasets? | [] | open | false | null | 2 | 2022-11-01T07:38:38Z | 2022-11-02T10:17:11Z | null | null | I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do?
As a user, I was wondering if we have this support for creating large image datasets. If so, we should mention that [here](https://huggingface.co/docs/datasets/image_dataset).
Cc @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5180/timeline | null | null | null | null | false | [
"The beam utilities allow to prepare a dataset as parquet in your cloud storage. From my perspective this CLI is not super easy to use, but we've been working on a new python API to prepare a dataset in your cloud storage:\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nbuilder = load_dataset_builder(\"c4\", \"en\")\r\nbuilder.download_and_prepapre(\"s3://my-bucket/c4\", file_format=\"parquet\")\r\n```\r\n\r\nAnd to use Beam you can do:\r\n```python\r\nbeam_runner = ... # one of \"SparkRunner\", \"DataFlowRunner\", \"DirectRunner\", etc.\r\nbeam_options = ...\r\n\r\nbuilder.download_and_prepapre(\r\n \"s3://my-bucket/c4\",\r\n file_format=\"parquet\",\r\n beam_runner=beam_runner,\r\n beam_options=beam_options\r\n)\r\n```\r\n\r\nThough Beam can be used ONLY if there is a dataset script based on the `BeamBasedBuilder` right now - it doesn't work on an arbitrary dataset (see [wikipedia.py](https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py) for example).",
"Thanks! \r\n\r\nWould be nice to have something similar for creating large image datasets. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1744/comments | https://api.github.com/repos/huggingface/datasets/issues/1744/events | https://github.com/huggingface/datasets/pull/1744 | 787,649,811 | MDExOlB1bGxSZXF1ZXN0NTU2MzA0MjU4 | 1,744 | Add missing "brief" entries to reuters | [] | closed | false | null | 2 | 2021-01-17T07:58:49Z | 2021-01-18T11:26:09Z | 2021-01-18T11:26:09Z | null | This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1744/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1744.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1744",
"merged_at": "2021-01-18T11:26:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1744.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1744"
} | true | [
"@lhoestq I ran `make style` but CI code quality still failing and I don't have access to logs",
"It's also likely that due to the previous placement of the field initialization, much of the data about topics etc was simply wrong and carried over from previous entries. Model scores seem to improve significantly with this PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/4050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4050/comments | https://api.github.com/repos/huggingface/datasets/issues/4050/events | https://github.com/huggingface/datasets/pull/4050 | 1,184,346,501 | PR_kwDODunzps41NAMF | 4,050 | Add RVL-CDIP dataset | [] | closed | false | null | 14 | 2022-03-29T06:00:02Z | 2022-04-22T09:55:07Z | 2022-04-21T17:15:41Z | null | Resolves #2762
Dataset Request: Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762)
This PR adds the RVL-CDIP dataset.
The dataset contains a Google Drive link for download and wasn't getting downloaded automatically, so I have provided manual_download_instructions.
- I have added the dummy_data.zip as well.
I need input on how to run the real-data and dummy-data tests for datasets with manual download.
Inputs and suggestions for improvement are welcome. Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4050/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4050/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4050.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4050",
"merged_at": "2022-04-21T17:15:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4050.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4050"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for inputs. I'll use the URL suggested and check.\r\n\r\n> we need to implement the streamable (can't use os.path.join) and the non-streamable versions of _generate_examples.\r\n\r\nSure. I will check the reference and try this out, will get back to you if I face any issues.\r\n\r\n> The labels-only data file URL doesn't work for me, so feel free to ask the authors whether they are OK with us hosting the file on the Hub/S3 (to speed up the streamable version)\r\n\r\nJust checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?",
"> Just checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?\r\n\r\nYes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.",
"> You can use this URL to avoid manual download: https://drive.google.com/uc?export=download&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc\r\n\r\nFor some reason, the direct download doesn't seem to work for me even with this URL. \r\n```\r\nDownloading and preparing dataset rvl_cdip/default to ~/.cache/huggingface/datasets/rvl_cdip/default/1.0.0/ea152149e06310d60a9ef3c3020199dd4780bb952a773ba5aac6b57d59f12628...\r\nDownloading data files: 100%|█████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6307.22it/s]\r\n{'rvl-cdip': '~/.cache/huggingface/datasets/downloads/07ef956a33750078d570d76fefe9fed49f7dc32ecf6e872d690de11e66bbe869'}\r\n```\r\nAnd this directory does not exist. Am I doing something wrong ?\r\nTo verify, I tried using [gdown](https://github.com/wkentaro/gdown) for the above URL, we get the following : \r\n```\r\nAccess denied with the following error:\r\n\r\n Cannot retrieve the public link of the file. You may need to change\r\n the permission to 'Anyone with the link', or have had many accesses. \r\n\r\nYou may still be able to access the file from the browser:\r\n```\r\n----\r\n\r\n> Yes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.\r\n\r\nGot it. I've sent you an email with the file. Thank you.",
"Actually this URL works for direct download :\r\n`https://drive.google.com/uc?export=download&confirm=pbef&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc`\r\nRef : https://github.com/wkentaro/gdown/issues/146#issuecomment-1042382215\r\n\r\nI'm working on the streamable versions of _generate_examples as well, will update you regarding this.",
"Google Drive is a tricky host, and it's easy to exceed daily download quota limits, so if we are allowed to host the `rvl-cdip.tar.gz` file, I can push it to the Hub.",
"Just checked, the authors have agreed. He mentioned that he had complaints about the GDrive link.\r\nYou can push it to the Hub and share the link. :)",
"I have added :\r\n- streaming support for rvl-cdip.tar.gz file. [ Need to test this ]\r\n\r\nIs it possible for you to upload the train.txt, test.txt, val.txt files separately to the Hub instead of labels_only.tar.gz file.\r\nCurrently during the tests in stream mode, we get : \r\n`NotImplementedError: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/mariosasko/rvl_cdip/resolve/main/labels_only.tar.gz' is not implemented in streaming mode. Please use dl_manager.iter_archive instead.`\r\nIf the label files are present as .txt files then we can directly use dl_manager.download.\r\n\r\n\r\n",
"The rvl-cdip.tar.gz archive and txt files with the labels are on the Hub!",
"- Added 🤗 Hub download links.\r\n- streamable and non-streamable versions of _generate_examples.\r\n- Updated dummy data, both real and dummy dataset tests have passed.\r\n\r\n",
"I've removed the extraction of the archive file locally as suggested. Let me know if any other changes are required. :)",
"The check for **Update Hub repositories / update-hub-repositories** has failed.\r\n\r\n> https://github.com/huggingface/datasets/runs/6116502392?check_suite_focus=true\r\n\r\n",
"Hi ! Thanks for reporting ;) yes this CI job has been failing for a few days. I'm working on fixing it, and I'm manually running it on my side in the meantime",
"Great. :D Thank you @lhoestq "
] |
https://api.github.com/repos/huggingface/datasets/issues/4033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4033/comments | https://api.github.com/repos/huggingface/datasets/issues/4033/events | https://github.com/huggingface/datasets/pull/4033 | 1,182,984,445 | PR_kwDODunzps41Ie6w | 4,033 | Fix checksum error in cats_vs_dogs dataset | [] | closed | false | null | 1 | 2022-03-28T07:01:25Z | 2022-03-28T07:49:39Z | 2022-03-28T07:44:24Z | null | Recent PR updated the metadata JSON file of cats_vs_dogs dataset:
- #3878
However, that new JSON file contains a None checksum.
This PR fixes it.
Fix #4032. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4033/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4033/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4033.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4033",
"merged_at": "2022-03-28T07:44:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4033.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4033"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3262/comments | https://api.github.com/repos/huggingface/datasets/issues/3262/events | https://github.com/huggingface/datasets/pull/3262 | 1,052,455,082 | PR_kwDODunzps4uej4t | 3,262 | asserts replaced with exception for image classification task, csv, json | [] | closed | false | null | 0 | 2021-11-12T22:34:59Z | 2021-11-15T11:08:37Z | 2021-11-15T11:08:37Z | null | Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3262/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3262/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3262.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3262",
"merged_at": "2021-11-15T11:08:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3262.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3262"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3892/comments | https://api.github.com/repos/huggingface/datasets/issues/3892/events | https://github.com/huggingface/datasets/pull/3892 | 1,166,227,003 | PR_kwDODunzps40ShYB | 3,892 | Fix CLI test checksums | [] | closed | false | null | 4 | 2022-03-11T10:04:04Z | 2022-03-15T12:28:24Z | 2022-03-15T12:28:23Z | null | Previous PR:
- #3796
introduced a side effect: `datasets-cli test` generates `dataset_infos.json` with `None` checksum values.
See:
- #3805
This PR introduces a way for `datasets-cli test` to force to record infos, even if `verify_infos=False`
Close #3848.
CC: @craffel | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3892/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3892/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3892.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3892",
"merged_at": "2022-03-15T12:28:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3892.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3892"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3892). All of your documentation changes will be reflected on that endpoint.",
"Feel free to merge if it's good for you :)",
"I've added a test @lhoestq. Once all green, I'll merge. ",
"Last failing tests do not have nothing to do with this PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/4447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4447/comments | https://api.github.com/repos/huggingface/datasets/issues/4447/events | https://github.com/huggingface/datasets/pull/4447 | 1,260,041,805 | PR_kwDODunzps45E4A- | 4,447 | Minor fixes/improvements in `scene_parse_150` card | [] | closed | false | null | 1 | 2022-06-03T15:22:34Z | 2022-06-06T15:50:25Z | 2022-06-06T15:41:37Z | null | Add `paperswithcode_id` and fix some links in the `scene_parse_150` card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4447/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4447",
"merged_at": "2022-06-06T15:41:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4447"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1565/comments | https://api.github.com/repos/huggingface/datasets/issues/1565/events | https://github.com/huggingface/datasets/pull/1565 | 766,333,940 | MDExOlB1bGxSZXF1ZXN0NTM5Mzg2MzEx | 1,565 | Create README.md | [] | closed | false | null | 5 | 2020-12-14T11:40:23Z | 2021-03-25T14:01:49Z | 2021-03-25T14:01:49Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1565/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1565/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1565.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1565",
"merged_at": "2021-03-25T14:01:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1565.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1565"
} | true | [
"@ManuelFay thanks you so much for adding a dataset card, this is such a cool contribution!\r\n\r\nThis looks like it uses an old template for the card we've moved things around a bit and we have an app you should be using to get the tags and the structure of the Data Fields paragraph :) Would you mind moving your text to the newer format (we're also asking contributors to keep the full template structure, even if some sections still have [More Information Needed] for the time being)\r\n\r\nHere's the link to the instructions:\r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nOut of curiosity, what was your landing point for filling out the card? Did you follow the \"Update on Github\" when navigating the datasets? Trying to make the instructions as clear as possible :) ",
"@yjernite \r\n\r\nPerfect, I'll follow the instructions when I have a bit more time tomorrow ! I was actually browsing the new contributions after the dataset sprint and realized most of the \"old\" datasets were not tagged, so I just copied and pasted the readme from another dataset and was not aware there was precise instructions... Will fix !\r\n\r\nBTW, amazing job with the retriBert work, I used the contrastive + in-batch negative quite a bit for various projects. Probably neither the time nor place to talk about that but I was curious as to why, in your original work, you prefered using a simple projection in the last layer to differentiate the question vs answer embedding, rather than allowing for bias in the dense layer or even just to fine-tune 2 different embedders for question + answer ? ",
"Cool! Looking forward to the next version!\r\n\r\nQuick answer for retriBERT is that I expected a simple projection to generalize better and more importantly only having to store the gradients for the proj means training with larger batches :) If you want to keep chatting about it, feel free to send me an email!",
"Hi @ManuelFay ! \r\nIf you're still interested in completing the FQuAD dataset card, note that we've generated one that is pre-filled.\r\nTherefore feel free to complete it with the content you already have in your README.md.\r\nThis would be awesome ! And thanks again for your contribution :)",
"Yo @lhoestq , just not sure about the tag table at the top, I used @yjernite eli5 template so hope it's okay ! Also want to signal the streamlit app for dataset tagging has a weird behavior with the size categories when filling in the form. \r\n\r\nThanks to you guys for doing that and sorry about the time it took, i completely forgot about it ! \r\n"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/2951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2951/comments | https://api.github.com/repos/huggingface/datasets/issues/2951/events | https://github.com/huggingface/datasets/pull/2951 | 1,001,267,888 | PR_kwDODunzps4r-lGs | 2,951 | Dummy labels no longer on by default in `to_tf_dataset` | [] | closed | false | null | 2 | 2021-09-20T18:26:59Z | 2021-09-21T14:00:57Z | 2021-09-21T10:14:32Z | null | After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2951/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2951.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2951",
"merged_at": "2021-09-21T10:14:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2951.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2951"
} | true | [
"@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR.",
"Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features"
] |
https://api.github.com/repos/huggingface/datasets/issues/3087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3087/comments | https://api.github.com/repos/huggingface/datasets/issues/3087/events | https://github.com/huggingface/datasets/issues/3087 | 1,026,780,469 | I_kwDODunzps49M201 | 3,087 | Removing label column in a text classification dataset yields to errors | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-10-14T20:12:50Z | 2021-10-15T10:11:04Z | 2021-10-15T10:11:04Z | null | ## Describe the bug
This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error.
To reproduce:
```py
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("imdb")
raw_datasets = raw_datasets.remove_columns("label")
model_checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
context_length = 128
def tokenize_pad_and_truncate(texts):
return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
```
Traceback:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-1-ba61bb32f786> in <module>
12 return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
13
---> 14 tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
~/git/datasets/src/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2051 new_fingerprint=new_fingerprint,
2052 disable_tqdm=disable_tqdm,
-> 2053 desc=desc,
2054 )
2055 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
501 self: "Dataset" = kwargs.pop("self")
502 # apply actual function
--> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
505 for dataset in datasets:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2243 if os.path.exists(cache_file_name) and load_from_cache_file:
2244 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2245 info = self.info.copy()
2246 info.features = features
2247 info.task_templates = None
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3087/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/404/comments | https://api.github.com/repos/huggingface/datasets/issues/404/events | https://github.com/huggingface/datasets/pull/404 | 658,400,987 | MDExOlB1bGxSZXF1ZXN0NDUwMzY4Mjg4 | 404 | Add seed in metrics | [] | closed | false | null | 0 | 2020-07-16T17:27:05Z | 2020-07-20T10:12:35Z | 2020-07-20T10:12:34Z | null | With #361 we noticed that some metrics were not deterministic.
In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.
The seed is set only when `compute` is called, and reset afterwards.
Moreover, when calling `compute` with the same metric instance (i.e. same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused.
However, instantiating a metric twice (two different experiments) without specifying a seed can create different results.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/404/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/404/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/404.diff",
"html_url": "https://github.com/huggingface/datasets/pull/404",
"merged_at": "2020-07-20T10:12:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/404.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/404"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/813/comments | https://api.github.com/repos/huggingface/datasets/issues/813/events | https://github.com/huggingface/datasets/issues/813 | 738,489,852 | MDU6SXNzdWU3Mzg0ODk4NTI= | 813 | How to implement DistributedSampler with datasets | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 4 | 2020-11-08T15:27:11Z | 2022-10-05T12:54:23Z | 2022-10-05T12:54:23Z | null | Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.
I need a DistributedSampler to be able to train the models on TPUs while distributing the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using datasets in which the datasets are iterative? To give you more context, I have multiple datasets and I need to write a sampler for this case. Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/813/timeline | null | completed | null | null | false | [
"Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. ",
"Hey @rabeehkarimimahabadi I'm actually looking for the same feature. Did you manage to get somewhere?",
"@rabeehkarimimahabadi need the same feature",
"Hi! I think you can use the `accelerate` library for that, which implements distributed sampling."
] |
https://api.github.com/repos/huggingface/datasets/issues/5340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5340/comments | https://api.github.com/repos/huggingface/datasets/issues/5340/events | https://github.com/huggingface/datasets/pull/5340 | 1,483,182,158 | PR_kwDODunzps5EtWo3 | 5,340 | Clean up DatasetInfo and Dataset docstrings | [] | closed | false | null | 1 | 2022-12-08T00:17:53Z | 2022-12-08T19:33:14Z | 2022-12-08T19:30:10Z | null | This PR cleans up the docstrings for `DatasetInfo` and about half of the methods in `Dataset`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5340/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5340",
"merged_at": "2022-12-08T19:30:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5340"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2861/comments | https://api.github.com/repos/huggingface/datasets/issues/2861/events | https://github.com/huggingface/datasets/pull/2861 | 985,081,871 | MDExOlB1bGxSZXF1ZXN0NzI0NDM2OTcw | 2,861 | fix: 🐛 be more specific when catching exceptions | [] | closed | false | null | 6 | 2021-09-01T12:18:12Z | 2021-09-02T09:53:36Z | 2021-09-02T09:52:03Z | null | The same specific exception is catched in other parts of the same
function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2861/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2861/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2861.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2861",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2861.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2861"
} | true | [
"To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?\r\n\r\n",
"Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a \"FileNotFoundError\" while it should not be caught. ",
"And what about passing the `timeout` parameter instead?",
"It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset`",
"I understand, you are trying to find a fix for your use case. OK.\r\n\r\nJust note that it is also an issue for `datasets` users. Once #2859 fixed in `datasets`, you will no longer have this issue...",
"Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case."
] |
https://api.github.com/repos/huggingface/datasets/issues/2040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2040/comments | https://api.github.com/repos/huggingface/datasets/issues/2040/events | https://github.com/huggingface/datasets/issues/2040 | 830,169,387 | MDU6SXNzdWU4MzAxNjkzODc= | 2,040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | [] | closed | false | null | 4 | 2021-03-12T14:27:00Z | 2021-08-04T18:00:43Z | 2021-08-04T18:00:43Z | null | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yielding the following error:
```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
I've been trying to solve this for quite some time now. Both `DataDict`s have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). I can't figure it out, though...
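For reference, a minimal sketch of the `flatten_indices()` workaround suggested in the comments below (the paths are the same placeholders as above):
```python
from datasets import load_from_disk, concatenate_datasets

# Flatten the indices mapping of each split before concatenating (a workaround, not a root-cause fix).
ds_a = load_from_disk(PATH_DATA_CLS_A)["train"].flatten_indices()
ds_b = load_from_disk(PATH_DATA_CLS_B)["train"].flatten_indices()
combined = concatenate_datasets([ds_a, ds_b])
```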
`load_from_disk(PATH_DATA_CLS_A)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 785
})
```
`load_from_disk(PATH_DATA_CLS_B)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 3341
})
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2040/timeline | null | completed | null | null | false | [
"Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.",
"Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive/MyDrive/data_target_task/dataset_b/'",
"In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```",
"Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! "
] |
https://api.github.com/repos/huggingface/datasets/issues/3838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3838/comments | https://api.github.com/repos/huggingface/datasets/issues/3838/events | https://github.com/huggingface/datasets/issues/3838 | 1,161,137,406 | I_kwDODunzps5FNYz- | 3,838 | Add a data type for labeled images (image segmentation) | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2022-03-07T09:38:15Z | 2022-04-10T13:34:59Z | null | null | It might be a mix of Image and ClassLabel, and the color palette might be generated automatically.
---
### Example
every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette (eg https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class with a color.
So we might want to render the image as a colored image instead of a black and white one.
<img width="785" alt="156741519-fbae6844-2606-4c28-837e-279d83d00865" src="https://user-images.githubusercontent.com/1676121/157005263-7058c584-2b70-465a-ad94-8a982f726cf4.png">
See https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/features/labeled_image.py for reference in Tensorflow | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3838/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4897/comments | https://api.github.com/repos/huggingface/datasets/issues/4897/events | https://github.com/huggingface/datasets/issues/4897 | 1,351,784,727 | I_kwDODunzps5QkpkX | 4,897 | datasets generate large arrow file | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-08-26T05:51:16Z | 2022-09-18T05:07:52Z | 2022-09-18T05:07:52Z | null | Checking the large file in disk, and found the large cache file in the cifar10 data directory:

As we know, the size of the cifar10 dataset is ~130MB, but the cache file is almost 30GB in size, so there may be a problem here. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4897/timeline | null | completed | null | null | false | [
"Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?",
"@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 times in size. \r\nI think maybe it doesn' matter, it's just cache after all."
] |
https://api.github.com/repos/huggingface/datasets/issues/897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/897/comments | https://api.github.com/repos/huggingface/datasets/issues/897/events | https://github.com/huggingface/datasets/issues/897 | 752,100,256 | MDU6SXNzdWU3NTIxMDAyNTY= | 897 | Dataset viewer issues | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 5 | 2020-11-27T09:14:34Z | 2021-10-31T09:12:01Z | 2021-10-31T09:12:01Z | null | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
_marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
* (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view: the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then a syntax highlighter is used, and the special characters are coded correctly.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/897/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?",
"Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time",
"9",
"⠀⠀⠀ ⠀ ",
"⠀⠀⠀ ⠀ "
] |
https://api.github.com/repos/huggingface/datasets/issues/53 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/53/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/53/comments | https://api.github.com/repos/huggingface/datasets/issues/53/events | https://github.com/huggingface/datasets/pull/53 | 613,436,158 | MDExOlB1bGxSZXF1ZXN0NDE0MTkwMzkz | 53 | [Features] Typo in generate_from_dict | [] | closed | false | null | 0 | 2020-05-06T16:05:23Z | 2020-05-07T15:28:46Z | 2020-05-07T15:28:45Z | null | Change `isinstance` test in features when generating features from dict. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/53/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/53/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/53.diff",
"html_url": "https://github.com/huggingface/datasets/pull/53",
"merged_at": "2020-05-07T15:28:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/53.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/53"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1653/comments | https://api.github.com/repos/huggingface/datasets/issues/1653/events | https://github.com/huggingface/datasets/pull/1653 | 775,632,945 | MDExOlB1bGxSZXF1ZXN0NTQ2Mjc0Njc0 | 1,653 | harem dataset: add data splits info | [] | closed | false | null | 0 | 2020-12-28T23:58:20Z | 2020-12-30T16:49:03Z | 2020-12-30T16:49:03Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1653/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1653/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1653.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1653",
"merged_at": "2020-12-30T16:49:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1653.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1653"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/999/comments | https://api.github.com/repos/huggingface/datasets/issues/999/events | https://github.com/huggingface/datasets/pull/999 | 755,246,786 | MDExOlB1bGxSZXF1ZXN0NTMwOTk1MTY3 | 999 | add generated_reviews_enth | [] | closed | false | null | 0 | 2020-12-02T12:50:43Z | 2020-12-03T11:17:28Z | 2020-12-03T11:17:28Z | null | `generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) are English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by Google Translate API and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality esitmation (binary label), machine translation, and sentiment analysis. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/999/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/999.diff",
"html_url": "https://github.com/huggingface/datasets/pull/999",
"merged_at": "2020-12-03T11:17:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/999.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/999"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5426/comments | https://api.github.com/repos/huggingface/datasets/issues/5426/events | https://github.com/huggingface/datasets/issues/5426 | 1,535,158,555 | I_kwDODunzps5bgKkb | 5,426 | CI tests are broken: SchemaInferenceError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2023-01-16T16:02:07Z | 2023-06-02T06:40:32Z | 2023-01-16T16:49:04Z | null | CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004
```
FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
```
Stack trace:
```
______________ BeamBuilderTest.test_download_and_prepare_sharded _______________
[gw1] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
self = <tests.test_beam.BeamBuilderTest testMethod=test_download_and_prepare_sharded>
@require_beam
def test_download_and_prepare_sharded(self):
import apache_beam as beam
original_write_parquet = beam.io.parquetio.WriteToParquet
expected_num_examples = len(get_test_dummy_examples())
with tempfile.TemporaryDirectory() as tmp_cache_dir:
builder = DummyBeamDataset(cache_dir=tmp_cache_dir, beam_runner="DirectRunner")
with patch("apache_beam.io.parquetio.WriteToParquet") as write_parquet_mock:
write_parquet_mock.side_effect = partial(original_write_parquet, num_shards=2)
> builder.download_and_prepare()
tests/test_beam.py:97:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:864: in download_and_prepare
**download_and_prepare_kwargs,
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:1976: in _download_and_prepare
num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:694: in finalize
shard_num_bytes, _ = parquet_to_arrow(source, destination)
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:740: in parquet_to_arrow
num_bytes, num_examples = writer.finalize()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <datasets.arrow_writer.ArrowWriter object at 0x7f6dcbb3e810>
close_stream = True
def finalize(self, close_stream=True):
self.write_rows_on_file()
# In case current_examples < writer_batch_size, but user uses finalize()
if self._check_duplicates:
self.check_duplicate_keys()
# Re-intializing to empty list for next batch
self.hkey_record = []
self.write_examples_on_file()
# If schema is known, infer features even if no examples were written
if self.pa_writer is None and self.schema:
self._build_writer(self.schema)
if self.pa_writer is not None:
self.pa_writer.close()
self.pa_writer = None
if close_stream:
self.stream.close()
else:
if close_stream:
self.stream.close()
> raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
E datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:593: SchemaInferenceError
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5426/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5528/comments | https://api.github.com/repos/huggingface/datasets/issues/5528/events | https://github.com/huggingface/datasets/pull/5528 | 1,582,195,085 | PR_kwDODunzps5J13wC | 5,528 | Push to hub in a pull request | [] | open | false | null | 10 | 2023-02-13T11:43:47Z | 2023-03-21T14:32:12Z | null | null | Fixes #5492.
Introduce new kwarg `create_pr` in `push_to_hub`, which is passed to `HFapi.upload_file`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5528/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5528.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5528",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5528.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5528"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5528). All of your documentation changes will be reflected on that endpoint.",
"It seems that the parameter `create_pr` is available for [`0.8.0`](https://huggingface.co/docs/huggingface_hub/v0.8.1/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) (its not here: [`0.7.0`](https://huggingface.co/docs/huggingface_hub/v0.7.0.rc0/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file)) and onwards. I included a warning, informing the user that no PR was created.",
"@nateraw you are completely right! Actually, the dataset shards is never added to the created pr, only the metadata, as the code is now. Ill look into you suggestion asap. Thank!",
"@nateraw Nothing more to add, that's a perfect usage of `huggingface_hub` as far as I can tell ! :fire: \r\n\r\nA very nit improvement would be to use the [for .. else ... python statement](https://book.pythontips.com/en/latest/for_-_else.html).\r\ni.e:\r\n\r\n```py\r\nif create_pr is True and revision is not None:\r\n for discussion in get_repo_discussions(repo_id, repo_type='dataset'):\r\n if discussion.is_pull_request and discussion.git_reference == revision:\r\n create_pr = False\r\n break\r\n else:\r\n raise ValueError(\"Provided revision not found\")\r\n```\r\nNo need for the `revision_found` temporary flag when do so. Yeah ok, it's niche :wink: ",
"I added the suggestions from @nateraw and @Wauplin .",
"> Thanks. Some comments/suggestions below...\r\n> \r\n> Why have you removed the test for create_pr? You could add it again and just add a pytest skipif when version of huggingface_hub is lower than 0.8.1.\r\n\r\nI have added the test again. I removed it because i kept getting errors when calling `create_pull_request` with `repo_id=ds_name` where `temporary_repo = ds_name`, and thought i might look more thoroughly at it later. I have added a test called `test_test` showing this, it gives:\r\n```\r\ntests/test_upstream_hub.py:360: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n return fn(*args, **kwargs)\r\n.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3451: in create_pull_request\r\n return self.create_discussion(\r\n.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n return fn(*args, **kwargs)\r\n.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3393: in create_discussion\r\n hf_raise_for_status(resp)\r\n(...)\r\nE huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-63ecd2cb-2cf2557a332c86ad27f687b3)\r\nE \r\nE Repository Not Found for url: https://huggingface.co/api/models/__DUMMY_TRANSFORMERS_USER__/test-16764648321590/discussions.\r\nE Please make sure you specified the correct `repo_id` and `repo_type`.\r\nE If you are trying to access a private or gated repo, make sure you are authenticated.\r\nE Invalid username or password.\r\n```",
"> > Thanks. Some comments/suggestions below...\r\n> > Why have you removed the test for create_pr? You could add it again and just add a pytest skipif when version of huggingface_hub is lower than 0.8.1.\r\n> \r\n> I have added the test again. I removed it because i kept getting errors when calling `create_pull_request` with `repo_id=ds_name` where `temporary_repo = ds_name`, and thought i might look more thoroughly at it later. I have added a test called `test_test` showing this, it gives:\r\n> \r\n> ```\r\n> tests/test_upstream_hub.py:360: \r\n> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n> return fn(*args, **kwargs)\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3451: in create_pull_request\r\n> return self.create_discussion(\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n> return fn(*args, **kwargs)\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3393: in create_discussion\r\n> hf_raise_for_status(resp)\r\n> (...)\r\n> E huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-63ecd2cb-2cf2557a332c86ad27f687b3)\r\n> E \r\n> E Repository Not Found for url: https://huggingface.co/api/models/__DUMMY_TRANSFORMERS_USER__/test-16764648321590/discussions.\r\n> E Please make sure you specified the correct `repo_id` and `repo_type`.\r\n> E If you are trying to access a private or gated repo, make sure you are authenticated.\r\n> E Invalid username or password.\r\n> ```\r\n\r\n@albertvillanova, @lhoestq , FYI I have looked at this again, and i haven't figured it out, so the test`test_push_dataset_to_hub_with_pull_request` and the minimal example `test_test` are still failing locally, while the other tests succeed. Do you have any advice?",
"I tried to move all of the \"create pr safely\"-logic to a seperate function in `_hf_hub_fixes`. I looked at how the exceptions were raised before `huggingface_hub.utils.RepositoryNotFoundError`existed, and make changes accordingly. ",
"`create_pr` was set during `push_to_hub`, even though it was `None` from the outset, hence causing tests to fail for older versions of `huggingface_hub`. This is now fixed.\r\n\r\nWith the implementation of `_hf_hub_fixes.upload_file` the function call expected `commit_message`, `commit_description`. If these are not set we call the function without them, even though we are on a version of `huggingface_hub` where they are not available in `upload_file`.\r\n\r\nWhen `huggingface_hub < 0.5.0` we assume `repo_id` of them form `organisation/name`, so now that we are calling `create_repo` in the tests with `repo_id` not of this form, we need to handle this case, this is now done.\r\n\r\nMany tests failed for `dataset_dict` for the above reasons, so the fixes from `arrow_dataset.py` were also added to `dataset_dict.py`. \r\n\r\n**All tests are now passing locally for `huggingface_hub==0.2.0` and `huggingface_hub==0.12.1`…** Im sorry I should have downgraded and went through this a long time ago, but I didn’t realise the extend of these version fixes until recently…",
"Hi ! FYI bumped the `huggingface-hub` dependency to 0.11 and removed the `_hf_hub_fixes.py` - which should make this PR much easier"
] |
https://api.github.com/repos/huggingface/datasets/issues/3978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3978/comments | https://api.github.com/repos/huggingface/datasets/issues/3978/events | https://github.com/huggingface/datasets/issues/3978 | 1,175,226,456 | I_kwDODunzps5GDIhY | 3,978 | I can't view HFcallback dataset for ASR Space | [] | open | false | null | 3 | 2022-03-21T11:07:49Z | 2022-04-04T13:34:38Z | null | null | ## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.*
Am I the one who added this dataset? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3978/timeline | null | null | null | null | false | [
"the dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?\r\n\r\nmaybe @lhoestq or @albertvillanova could help\r\n\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-03-24 à 17 36 20\" src=\"https://user-images.githubusercontent.com/1676121/159966006-57dcf8f7-b65f-4200-ac8c-66859318a8bb.png\">\r\n",
"The structure of the dataset is not supported. Only the CSV file is parsed and the audio files are ignored.\r\n\r\nWe're working on supporting audio datasets with a specific structure in #3963 ",
"Got it."
] |
https://api.github.com/repos/huggingface/datasets/issues/754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/754/comments | https://api.github.com/repos/huggingface/datasets/issues/754/events | https://github.com/huggingface/datasets/pull/754 | 727,863,105 | MDExOlB1bGxSZXF1ZXN0NTA4NjczNzM2 | 754 | Use full released xsum dataset | [] | closed | false | null | 3 | 2020-10-23T03:29:49Z | 2021-01-01T03:11:56Z | 2020-10-26T12:56:58Z | null | #672 Fix xsum to expand coverage and include IDs
Code based on parser from older version of `datasets/xsum/xsum.py`
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/754/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/754",
"merged_at": "2020-10-26T12:56:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/754"
} | true | [
"@lhoestq I took a shot at addressing your comments but the build scripts seem to be complaining about not being able to open dummy files. How do I resolve those errors without copying the full dataset into the dummy dir?",
"Could you check that the names of the dummy data files are right ?\r\nYou can use \r\n```\r\ndatasets-cli dummy_data ./datasets/xum\r\n```\r\nto print the expected file names",
"Ok @lhoestq looks like I got the tests to pass :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3137/comments | https://api.github.com/repos/huggingface/datasets/issues/3137/events | https://github.com/huggingface/datasets/pull/3137 | 1,033,363,652 | PR_kwDODunzps4tievk | 3,137 | Fix numpy deprecation warning for ragged tensors | [] | closed | false | null | 1 | 2021-10-22T09:17:46Z | 2021-10-22T16:04:15Z | 2021-10-22T16:04:14Z | null | Numpy shows a deprecation warning when we call `np.array` on a list of ragged tensors without specifying the `dtype`. If their shapes match, the tensors can be collated together, otherwise the resulting array should have `dtype=np.object`.
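For illustration, a minimal sketch of the behaviour described above (not the patch itself):
```python
import numpy as np

# Ragged (different-length) arrays: passing dtype=object explicitly avoids the
# NumPy deprecation warning mentioned above; matching shapes can be collated as usual.
ragged = [np.array([1, 2, 3]), np.array([1, 2])]
arr = np.array(ragged, dtype=object)
```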
Fix #3084
cc @Rocketknight1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3137/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3137",
"merged_at": "2021-10-22T16:04:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3137"
} | true | [
"This'll be a really helpful fix, thank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4784/comments | https://api.github.com/repos/huggingface/datasets/issues/4784/events | https://github.com/huggingface/datasets/issues/4784 | 1,326,395,280 | I_kwDODunzps5PDy-Q | 4,784 | Add Multiface dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | 3 | 2022-08-02T21:00:22Z | 2022-08-08T14:42:36Z | null | null | ## Adding a Dataset
- **Name:** Multiface dataset
- **Description:** High-quality recordings of the faces of 13 identities, each captured in a multi-view capture stage performing various facial expressions. An average of 12,200 (v1 scripts) to 23,000 (v2 scripts) frames per subject, with a capture rate of 30 fps
- **Data:** https://github.com/facebookresearch/multiface
The whole dataset is 65TB though, so I'm not sure
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4784/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4784/timeline | null | null | null | null | false | [
"Hi @osanseviero I would like to add this dataset.",
"Hey @nandwalritik! Thanks for offering to help!\r\n\r\nThis dataset might be somewhat complex and I'm concerned about it being 65 TB, which would be quite expensive to host. @lhoestq @mariosasko I would love your input if you think it's worth adding this dataset.",
"Thanks for proposing this interesting dataset, @osanseviero.\r\n\r\nPlease note that the data files are already hosted in a third-party server: e.g. the index of data files for entity \"6795937\" is at https://fb-baas-f32eacb9-8abb-11eb-b2b8-4857dd089e15.s3.amazonaws.com/MugsyDataRelease/v0.0/identities/6795937/index.html \r\n- audio.tar: https://fb-baas-f32eacb9-8abb-11eb-b2b8-4857dd089e15.s3.amazonaws.com/MugsyDataRelease/v0.0/identities/6795937/audio.tar\r\n- ...\r\n\r\nTherefore, in principle, we don't need to host them on our Hub: it would be enough to just implement a loading script in the corresponding Hub dataset repo, e.g. \"facebook/multiface\"..."
] |
https://api.github.com/repos/huggingface/datasets/issues/2234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2234/comments | https://api.github.com/repos/huggingface/datasets/issues/2234/events | https://github.com/huggingface/datasets/pull/2234 | 860,442,246 | MDExOlB1bGxSZXF1ZXN0NjE3MzI4NDU3 | 2,234 | Fix bash snippet formatting in ADD_NEW_DATASET.md | [] | closed | false | null | 0 | 2021-04-17T16:01:08Z | 2021-04-19T10:57:31Z | 2021-04-19T07:51:36Z | null | This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2234/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2234",
"merged_at": "2021-04-19T07:51:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2234"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3937/comments | https://api.github.com/repos/huggingface/datasets/issues/3937/events | https://github.com/huggingface/datasets/issues/3937 | 1,170,832,006 | I_kwDODunzps5FyXqG | 3,937 | Missing languages in lvwerra/github-code dataset | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | 5 | 2022-03-16T10:32:03Z | 2022-03-22T07:09:23Z | 2022-03-21T14:50:47Z | null | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3937/timeline | null | completed | null | null | false | [
"Thanks for contacting @Eytan-S.\r\n\r\nI think @lvwerra could better answer this. ",
"That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.\r\n\r\nThanks for reporting this @Eytan-S!",
"Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:\r\n```Python\r\n{'Assembly': 82847,\r\n 'Batchfile': 236755,\r\n 'C': 14127969,\r\n 'C#': 6793439,\r\n 'C++': 7368473,\r\n 'CMake': 175076,\r\n 'CSS': 1733625,\r\n 'Dockerfile': 331966,\r\n 'FORTRAN': 141963,\r\n 'GO': 2259363,\r\n 'Haskell': 340521,\r\n 'HTML': 11165464,\r\n 'Java': 19515696,\r\n 'JavaScript': 11829024,\r\n 'Julia': 58177,\r\n 'Lua': 576279,\r\n 'Makefile': 679338,\r\n 'Markdown': 8454049,\r\n 'PHP': 11181930,\r\n 'Perl': 497490,\r\n 'PowerShell': 136827,\r\n 'Python': 7203553,\r\n 'Ruby': 4479767,\r\n 'Rust': 321765,\r\n 'SQL': 655657,\r\n 'Scala': 0,\r\n 'Shell': 1382786,\r\n 'TypeScript': 0,\r\n 'TeX': 250764,\r\n 'Visual Basic': 155371}\r\n ```",
"@Eytan-S check out v1.1 of the `github-code` dataset where issue should be fixed:\r\n\r\n| | Language |File Count| Size (GB)|\r\n|---:|:-------------|---------:|-------:|\r\n| 0 | Java | 19548190 | 107.7 |\r\n| 1 | C | 14143113 | 183.83 |\r\n| 2 | JavaScript | 11839883 | 87.82 |\r\n| 3 | HTML | 11178557 | 118.12 |\r\n| 4 | PHP | 11177610 | 61.41 |\r\n| 5 | Markdown | 8464626 | 23.09 |\r\n| 6 | C++ | 7380520 | 87.73 |\r\n| 7 | Python | 7226626 | 52.03 |\r\n| 8 | C# | 6811652 | 36.83 |\r\n| 9 | Ruby | 4473331 | 10.95 |\r\n| 10 | GO | 2265436 | 19.28 |\r\n| 11 | TypeScript | 1940406 | 24.59 |\r\n| 12 | CSS | 1734406 | 22.67 |\r\n| 13 | Shell | 1385648 | 3.01 |\r\n| 14 | Scala | 835755 | 3.87 |\r\n| 15 | Makefile | 679430 | 2.92 |\r\n| 16 | SQL | 656671 | 5.67 |\r\n| 17 | Lua | 578554 | 2.81 |\r\n| 18 | Perl | 497949 | 4.7 |\r\n| 19 | Dockerfile | 366505 | 0.71 |\r\n| 20 | Haskell | 340623 | 1.85 |\r\n| 21 | Rust | 322431 | 2.68 |\r\n| 22 | TeX | 251015 | 2.15 |\r\n| 23 | Batchfile | 236945 | 0.7 |\r\n| 24 | CMake | 175282 | 0.54 |\r\n| 25 | Visual Basic | 155652 | 1.91 |\r\n| 26 | FORTRAN | 142038 | 1.62 |\r\n| 27 | PowerShell | 136846 | 0.69 |\r\n| 28 | Assembly | 82905 | 0.78 |\r\n| 29 | Julia | 58317 | 0.29 |",
"Thanks @lvwerra. "
] |
https://api.github.com/repos/huggingface/datasets/issues/706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/706/comments | https://api.github.com/repos/huggingface/datasets/issues/706/events | https://github.com/huggingface/datasets/pull/706 | 713,721,959 | MDExOlB1bGxSZXF1ZXN0NDk2OTkwMDA0 | 706 | Fix config creation for data files with NamedSplit | [] | closed | false | null | 0 | 2020-10-02T15:46:49Z | 2020-10-05T08:15:00Z | 2020-10-05T08:14:59Z | null | During config creation, we need to iterate through the data files of all the splits to compute a hash.
To make sure the hash is unique given a certain combination of files/splits, we sort the split names.
However, the `NamedSplit` objects can't be passed to `sorted`, and currently this raises an error: we need to sort their string names instead.
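For illustration, a minimal sketch of the idea (not the actual patch):
```python
from datasets.splits import NamedSplit

# Sort the splits by their string names rather than by the NamedSplit objects themselves.
splits = [NamedSplit("train"), NamedSplit("test")]
sorted_names = sorted(str(split) for split in splits)  # ["test", "train"]
```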
Fix #705 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/706/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/706/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/706.diff",
"html_url": "https://github.com/huggingface/datasets/pull/706",
"merged_at": "2020-10-05T08:14:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/706.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/706"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4667/comments | https://api.github.com/repos/huggingface/datasets/issues/4667/events | https://github.com/huggingface/datasets/issues/4667 | 1,299,735,703 | I_kwDODunzps5NeGSX | 4,667 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 0 | 2022-07-09T18:03:15Z | 2022-07-11T07:47:15Z | 2022-07-11T07:47:15Z | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4667/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4667/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1147/comments | https://api.github.com/repos/huggingface/datasets/issues/1147/events | https://github.com/huggingface/datasets/pull/1147 | 757,502,199 | MDExOlB1bGxSZXF1ZXN0NTMyODY4MzU2 | 1,147 | Vinay/add/telugu books | [] | closed | false | null | 0 | 2020-12-05T01:17:02Z | 2020-12-05T16:36:04Z | 2020-12-05T16:36:04Z | null | Real data tests are failing as this dataset needs to be manually downloaded | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1147/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1147.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1147",
"merged_at": "2020-12-05T16:36:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1147.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1147"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3604/comments | https://api.github.com/repos/huggingface/datasets/issues/3604/events | https://github.com/huggingface/datasets/issues/3604 | 1,108,477,316 | I_kwDODunzps5CEgWE | 3,604 | Dataset Viewer not showing Previews for Private Datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-01-19T19:29:26Z | 2022-09-26T08:04:43Z | 2022-09-26T08:04:43Z | null | ## Dataset viewer issue for 'abidlabs/test-audio-13'
It seems that the dataset viewer does not show previews for `private` datasets, even for the user whose private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private datasets.

**Link:**
[1] https://huggingface.co/datasets/abidlabs/test-audio-13
**Am I the one who added this dataset?**
Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3604/timeline | null | completed | null | null | false | [
"Sure, it's on the roadmap.",
"Closing in favor of https://github.com/huggingface/datasets-server/issues/39."
] |
https://api.github.com/repos/huggingface/datasets/issues/5914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5914/comments | https://api.github.com/repos/huggingface/datasets/issues/5914/events | https://github.com/huggingface/datasets/issues/5914 | 1,731,483,996 | I_kwDODunzps5nNFlc | 5,914 | array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets | [] | open | false | null | 0 | 2023-05-30T04:25:00Z | 2023-05-30T04:25:00Z | null | null | ### Describe the bug
When using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message "array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size."
Detailed error message:
```
Traceback (most recent call last):
File "data_processing.py", line 26, in <module>
processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split],writer_batch_size = 50)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2405, in map
desc=desc,
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2756, in _map_single
example = apply_function_on_filtered_inputs(example, i, offset=offset)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "data_processing.py", line 11, in prepare_dataset
audio = batch["audio"]
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 123, in __getitem__
value = decode_nested_example(self.features[key], value) if value is not None else None
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/features.py", line 1260, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 156, in decode_example
array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 257, in _decode_non_mp3_path_like
array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 176, in load
y, sr_native = __soundfile_load(path, offset, duration, dtype)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 222, in __soundfile_load
y = sf_desc.read(frames=frame_duration, dtype=dtype, always_2d=False).T
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 891, in read
out = self._create_empty_array(frames, always_2d, dtype)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 1323, in _create_empty_array
return np.empty(shape, dtype, order='C')
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
```
### Steps to reproduce the bug
```python
from datasets import load_dataset, DatasetDict
from transformers import WhisperFeatureExtractor
from transformers import WhisperTokenizer
samromur_children= load_dataset("language-and-voice-lab/samromur_children")
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="icelandic", task="transcribe")
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=16000).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["normalized_text"]).input_ids
return batch
cache_dict = {"train": "./cache/audio_train.cache", \
"validation": "./cache/audio_validation.cache", \
"test": "./cache/audio_test.cache"}
filter_cache_dict = {"train": "./cache/filter_train.arrow", \
"validation": "./cache/filter_validation.arrow", \
"test": "./cache/filter_test.arrow"}
print("before filtering")
print(samromur_children)
#filter the dataset to only include examples with more than 2 seconds of audio
samromur_children = samromur_children.filter(lambda example: example["audio"]["array"].shape[0] > 16000*2, cache_file_names=filter_cache_dict)
print("after filtering")
print(samromur_children)
processed_dataset = DatasetDict()
# processed_dataset = samromur_children.map(prepare_dataset, cache_file_names=cache_dict, num_proc=10,)
for split in ["train", "validation", "test"]:
processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split])
```
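For debugging, a sketch along these lines can help locate the file that triggers the allocation error without decoding every waveform. It assumes `Audio(decode=False)` is available in the installed `datasets` version and that the undecoded examples expose a local `path`; the 10-minute threshold is an arbitrary illustration:
```python
import soundfile as sf
from datasets import Audio

# Keep raw paths/bytes instead of decoding full waveforms
undecoded = samromur_children.cast_column("audio", Audio(decode=False))

for split in ["train", "validation", "test"]:
    for i, example in enumerate(undecoded[split]):
        info = sf.info(example["audio"]["path"])   # reads the header only
        if info.frames > info.samplerate * 600:    # flag clips reported as longer than ~10 min
            print(split, i, example["audio"]["path"], info.frames)
```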
### Expected behavior
The dataset is successfully processed and ready to train the model.
### Environment info
Python version: 3.7.13
datasets package version: 2.4.0
librosa package version: 0.10.0.post2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5914/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5914/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4390/comments | https://api.github.com/repos/huggingface/datasets/issues/4390/events | https://github.com/huggingface/datasets/pull/4390 | 1,244,835,877 | PR_kwDODunzps44RoXs | 4,390 | Fix metadata validation | [] | closed | false | null | 1 | 2022-05-23T09:11:20Z | 2022-06-01T09:27:52Z | 2022-06-01T09:19:25Z | null | Since Python 3.8, the typing module:
- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`
- provides the `get_args` function instead: `get_args(List)`
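A minimal sketch of the kind of compatibility shim this implies (illustrative only, not the exact code in this PR; `get_type_args` is a made-up helper name):
```python
import typing

def get_type_args(tp):
    """Return the type arguments of `tp` on both old and new Pythons."""
    if hasattr(typing, "get_args"):  # Python >= 3.8
        return typing.get_args(tp)
    return getattr(tp, "__args__", ())  # older Pythons expose __args__ directly

# get_type_args(typing.List[int])  ->  (<class 'int'>,)
```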
This PR implements a fix for Python >=3.8 while maintaining backward compatibility. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4390/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4390/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4390.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4390",
"merged_at": "2022-06-01T09:19:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4390.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4390"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1894/comments | https://api.github.com/repos/huggingface/datasets/issues/1894/events | https://github.com/huggingface/datasets/issues/1894 | 809,609,654 | MDU6SXNzdWU4MDk2MDk2NTQ= | 1,894 | benchmarking against MMapIndexedDataset | [] | open | false | null | 3 | 2021-02-16T20:04:58Z | 2021-02-17T18:52:28Z | null | null | I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens).
Questions:
1) Is this (basically identical) performance expected?
2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?)
3) Should I be using different benchmarking tools than `psrecord`/how do you guys do benchmarks?
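For reference, a rough way to measure raw read throughput on the `datasets` side (a sketch, not a rigorous benchmark — batch size and the character-count metric are arbitrary choices):
```python
import time
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")

start = time.perf_counter()
n_chars = 0
for i in range(0, len(ds), 1000):
    # slicing returns a dict of columns for the batch
    n_chars += sum(len(t) for t in ds[i : i + 1000]["text"])
elapsed = time.perf_counter() - start
print(f"~{n_chars / elapsed / 1e6:.1f} MB/s over {len(ds)} rows")
```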
Thanks in advance! Sam | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1894/timeline | null | null | null | null | false | [
"Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I/O performance is the speed of your hard drive/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for reading text, even though I found recently that we could still slightly improve speed for big datasets (see [here](https://github.com/huggingface/datasets/issues/1803)).\r\n\r\nIn terms of number of examples and example sizes, the only limit is the available disk space you have.\r\n\r\nI haven't used `psrecord` yet but it seems to be a very interesting tool for benchmarking. Currently for benchmarks we only have github actions to avoid regressions in terms of speed. But it would be cool to have benchmarks with comparisons with other dataset tools ! This would be useful to many people",
"Also I would be interested to know what data types `MMapIndexedDataset` supports. Is there some documentation somewhere ?",
"no docs haha, it's written to support integer numpy arrays.\r\n\r\nYou can build one in fairseq with, roughly:\r\n```bash\r\n\r\nwget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip\r\nunzip wikitext-103-raw-v1.zip\r\nexport dd=$HOME/fairseq-py/wikitext-103-raw\r\n\r\nexport mm_dir=$HOME/mmap_wikitext2\r\nmkdir -p gpt2_bpe\r\nwget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json\r\nwget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe\r\nwget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt\r\nfor SPLIT in train valid; do \\\r\n python -m examples.roberta.multiprocessing_bpe_encoder \\\r\n --encoder-json gpt2_bpe/encoder.json \\\r\n --vocab-bpe gpt2_bpe/vocab.bpe \\\r\n --inputs /scratch/stories_small/${SPLIT}.txt \\\r\n --outputs /scratch/stories_small/${SPLIT}.bpe \\\r\n --keep-empty \\\r\n --workers 60; \\\r\ndone\r\n\r\nmkdir -p $mm_dir\r\nfairseq-preprocess \\\r\n --only-source \\\r\n --srcdict gpt2_bpe/dict.txt \\\r\n --trainpref $dd/wiki.train.bpe \\\r\n --validpref $dd/wiki.valid.bpe \\\r\n --destdir $mm_dir \\\r\n --workers 60 \\\r\n --dataset-impl mmap\r\n```\r\n\r\nI'm noticing in my benchmarking that it's much smaller on disk than arrow (200mb vs 900mb), and that both incur significant cost by increasing the number of data loader workers. \r\nThis somewhat old [post](https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html) suggests there are some gains to be had from using `pyarrow.serialize(array).tobuffer()`. I haven't yet figured out how much of this stuff `pa.Table` does under the hood.\r\n\r\nThe `MMapIndexedDataset` bottlenecks we are working on improving (by using arrow) are:\r\n1) `MMapIndexedDataset`'s index, which stores offsets, basically gets read in its entirety by each dataloading process.\r\n2) we have separate, identical, `MMapIndexedDatasets` on each dataloading worker, so there's redundancy there; we wonder if there is a way that arrow can somehow dedupe these in shared memory.\r\n\r\nIt will take me a few hours to get `MMapIndexedDataset` benchmarks out of `fairseq`/onto a branch in this repo, but I'm happy to invest the time if you're interested in collaborating on some performance hacking."
] |
https://api.github.com/repos/huggingface/datasets/issues/5171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5171/comments | https://api.github.com/repos/huggingface/datasets/issues/5171/events | https://github.com/huggingface/datasets/pull/5171 | 1,425,355,111 | PR_kwDODunzps5BpsXf | 5,171 | Add PB and TB in convert_file_size_to_int | [] | closed | false | null | 1 | 2022-10-27T09:50:31Z | 2022-10-27T12:14:27Z | 2022-10-27T12:12:30Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5171/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5171/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5171.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5171",
"merged_at": "2022-10-27T12:12:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5171.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5171"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3597/comments | https://api.github.com/repos/huggingface/datasets/issues/3597/events | https://github.com/huggingface/datasets/issues/3597 | 1,108,092,864 | I_kwDODunzps5CDCfA | 3,597 | ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-01-19T13:19:28Z | 2022-08-05T12:35:51Z | 2022-02-14T08:46:34Z | null | ## Bug
The install of the `streaming` extra is giving the following error.
## Steps to reproduce the bug
```python
! git clone https://github.com/huggingface/datasets.git
! cd datasets
! pip install -e ".[streaming]"
```
## Actual results
Cloning into 'datasets'...
remote: Enumerating objects: 50816, done.
remote: Counting objects: 100% (2356/2356), done.
remote: Compressing objects: 100% (1606/1606), done.
remote: Total 50816 (delta 834), reused 1741 (delta 525), pack-reused 48460
Receiving objects: 100% (50816/50816), 72.47 MiB | 27.68 MiB/s, done.
Resolving deltas: 100% (22541/22541), done.
Checking out files: 100% (6722/6722), done.
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3597/timeline | null | completed | null | null | false | [
"Hi! The `cd` command in Jupyer/Colab needs to start with `%`, so this should work:\r\n```\r\n!git clone https://github.com/huggingface/datasets.git\r\n%cd datasets\r\n!pip install -e \".[streaming]\"\r\n```",
"thanks @mariosasko i had the same mistake and your solution is what was needed"
] |
https://api.github.com/repos/huggingface/datasets/issues/1399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1399/comments | https://api.github.com/repos/huggingface/datasets/issues/1399/events | https://github.com/huggingface/datasets/pull/1399 | 760,499,576 | MDExOlB1bGxSZXF1ZXN0NTM1MzIwNzA2 | 1,399 | Add HoVer Dataset | [] | closed | false | null | 2 | 2020-12-09T16:55:39Z | 2020-12-14T10:57:23Z | 2020-12-14T10:57:22Z | null | HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
https://arxiv.org/abs/2011.03088 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1399/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1399/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1399.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1399",
"merged_at": "2020-12-14T10:57:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1399.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1399"
} | true | [
"@lhoestq all comments addressed :) ",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3691/comments | https://api.github.com/repos/huggingface/datasets/issues/3691/events | https://github.com/huggingface/datasets/pull/3691 | 1,127,629,306 | PR_kwDODunzps4yQThV | 3,691 | Upgrade black to version ~=22.0 | [] | closed | false | null | 0 | 2022-02-08T18:45:19Z | 2022-02-08T19:56:40Z | 2022-02-08T19:56:39Z | null | Upgrades the `datasets` library quality tool `black` to use the first stable release of `black`, version 22.0. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3691/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3691.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3691",
"merged_at": "2022-02-08T19:56:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3691.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3691"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/59 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/59/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/59/comments | https://api.github.com/repos/huggingface/datasets/issues/59/events | https://github.com/huggingface/datasets/pull/59 | 614,366,045 | MDExOlB1bGxSZXF1ZXN0NDE0OTM3NTgx | 59 | Fix tests | [] | closed | false | null | 5 | 2020-05-07T21:48:09Z | 2020-05-08T10:57:57Z | 2020-05-08T10:46:51Z | null | @patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts.
I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python
cachedir: .pytest_cache
rootdir: /Users/thomwolf/Documents/GitHub/datasets
plugins: xdist-1.31.0, forked-1.1.3
collected 1 item
tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR
=================================================================================== ERRORS ====================================================================================
____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________
file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'>
download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)
download_kwargs = {}
def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:
r"""
Download/extract/cache a dataset to add to the lib from a path or url which can be:
- a path to a local directory containing the dataset processing python script
- an url to a S3 directory with a dataset processing python script
Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)
and using cloudpickle (among other things).
Return: tuple of
the unique id associated to the dataset
the local path to the dataset
"""
if download_config is None:
download_config = DownloadConfig(**download_kwargs)
download_config.extract_compressed_file = True
download_config.force_extract = True
> name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"
E AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
src/nlp/load.py:169: AttributeError
============================================================================== warnings summary ===============================================================================
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== short test summary info ===========================================================================
ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
========================================================================= 1 warning, 1 error in 3.63s =========================================================================
```
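For illustration, a minimal standalone reproduction of this kind of name clash (hypothetical file names, not part of this repo): pytest treats any module-level `setup_module` attribute of a test module as its xunit setup hook and calls it with the module object.
```python
# helpers.py  (hypothetical)
def setup_module(file_path: str):
    # expects a path string, like nlp.load.setup_module does
    return list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"

# test_demo.py  (hypothetical)
from helpers import setup_module  # becomes an attribute of the test module

def test_ok():
    assert True

# Running `pytest test_demo.py` calls setup_module(<module 'test_demo'>) before
# test_ok, which fails with:
#   AttributeError: module 'test_demo' has no attribute 'split'
```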
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/59/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/59/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/59.diff",
"html_url": "https://github.com/huggingface/datasets/pull/59",
"merged_at": "2020-05-08T10:46:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/59.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/59"
} | true | [
"I can fix the tests tomorrow :-) ",
"Very weird bug indeed! I think the problem was that when importing `setup_module` we overwrote `pytest's` setup_module function. I think this is the relevant code in pytest: https://github.com/pytest-dev/pytest/blob/9d2eabb397b059b75b746259daeb20ee5588f559/src/_pytest/python.py#L460.",
"Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n\r\nI think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham \r\n\r\n",
"> Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n> \r\n> I think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham\r\n\r\nI think if it only needs a re-uploading, we can rename it, `DatasetBuilder.config` is easier and sounds better",
"Ok seems to be fine. Most tests work - merging."
] |
https://api.github.com/repos/huggingface/datasets/issues/2718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2718/comments | https://api.github.com/repos/huggingface/datasets/issues/2718/events | https://github.com/huggingface/datasets/pull/2718 | 953,360,663 | MDExOlB1bGxSZXF1ZXN0Njk3NDE0NTQy | 2,718 | New documentation structure | [] | closed | false | null | 5 | 2021-07-26T23:15:13Z | 2021-09-13T17:20:53Z | 2021-09-13T17:20:52Z | null | Organize Datasets documentation into four documentation types to improve clarity and discoverability of content.
**Content to add in the very short term (feel free to add anything I'm missing):**
- A discussion on why Datasets uses Arrow, with some context and background on that choice. It would also be great to talk about Datasets speed and performance here, and if you can share any benchmarking/tests you did, that would be awesome! Finally, a discussion about how memory-mapping frees the user from RAM constraints would be very helpful.
- Explain why you would want to disable or override verifications when loading a dataset.
- If possible, include a code sample of when the number of elements in the field of an output dictionary aren’t the same as the other fields in the output dictionary (taken from the [note](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset) here). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2718/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2718/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2718.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2718",
"merged_at": "2021-09-13T17:20:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2718.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2718"
} | true | [
"I just did some minor changes + added some content in these sections: share, about arrow, about cache\r\n\r\nFeel free to mark this PR as ready for review ! :)",
"I just separated the `Share` How-to page into three pages: share, dataset_script and dataset_card.\r\n\r\nThis way in the share page we can explain in more details how to share a community or a canonical dataset - focus in their differences and the steps to upload them.\r\n\r\nAlso given that making a dataset script or a dataset card both require several steps, I feel like it's better to have dedicated pages for them.\r\n\r\nLet me know what you think @stevhliu and others. We can still revert this change if you feel like it was better with everything in the same place.",
"I just added some minor changes to match the style, fix typos, etc. Great work on the conceptual guides, I learned a lot from them and I'm sure they will help a lot of other people too!\r\n\r\nI am fine with splitting `Share` into three separate pages. I think this probably makes it easier for users to navigate, instead of having to scroll up and down on a really long single page.",
"Thanks a lot for all the suggestions ! I'm doing the final changes based on the remaining comments, then we can merge and release v1.12 of `datasets` and the new documentation ^^",
"Alright I think I took all the suggestions and comments into account :)\r\nThanks everyone for the help !"
] |
https://api.github.com/repos/huggingface/datasets/issues/5025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5025/comments | https://api.github.com/repos/huggingface/datasets/issues/5025/events | https://github.com/huggingface/datasets/issues/5025 | 1,386,011,239 | I_kwDODunzps5SnNpn | 5,025 | Custom Json Dataset Throwing Error when batch is False | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-09-26T12:38:39Z | 2022-09-27T19:50:00Z | 2022-09-27T19:50:00Z | null | ## Describe the bug
A clear and concise description of what the bug is.
I tried to create my custom dataset using the code below:
```
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
    # Some preprocessing for each image and text, as all my data is saved in the cloud
    # For this reason I couldn't set batched to True.
encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
truncation=True, padding="max_length")
# encoding['pixel_values']=np.array(encoding['pixel_values'])
return encoding
dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
prepare_examples,
batched=False,
remove_columns=column_names,
features=features
)
```
It throws the error below.
```
/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
172 storage = to_pyarrow_listarray(data, pa_type)
--> 173 return pa.ExtensionArray.from_storage(pa_type, storage)
174
/opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.ExtensionArray.from_storage()
TypeError: Incompatible storage type list<item: list<item: list<item: list<item: float>>>> for extension type extension<arrow.py_extension_type<Array3DExtensionType>>
```
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
    # Some preprocessing for each image and text, as all my data is saved in the cloud
encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
truncation=True, padding="max_length")
# encoding['pixel_values']=np.array(encoding['pixel_values'])
return encoding
dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
prepare_examples,
batched=False,
remove_columns=column_names,
features=features
)
```
## Expected results
A clear and concise description of the expected results.
The expected result would be similar to all the other datasets, with no error.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Unix
- Python version: 3.9
- PyArrow version: 9.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5025/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5025/timeline | null | completed | null | null | false | [
"Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessing for each image and text as all my data saved in cloud\r\n #For this reason I couldn't set the batch to True. \r\n encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n # drop extra dim\r\n for k in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n return encoding\r\n```",
"> Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n> \r\n> ```python\r\n> def prepare_examples(examples):\r\n> #Some preporcessing for each image and text as all my data saved in cloud\r\n> #For this reason I couldn't set the batch to True. \r\n> encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n> truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n> # drop extra dim\r\n> for k in encoding.items():\r\n> encoding[k]=encoding[k][0]\r\n> return encoding\r\n> ```\r\n\r\nThank you it did work\r\n\r\n```\r\nfor k,v in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/2871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2871/comments | https://api.github.com/repos/huggingface/datasets/issues/2871/events | https://github.com/huggingface/datasets/issues/2871 | 989,436,088 | MDU6SXNzdWU5ODk0MzYwODg= | 2,871 | datasets.config.PYARROW_VERSION has no attribute 'major' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2021-09-06T21:06:57Z | 2021-09-08T08:51:52Z | 2021-09-08T08:51:52Z | null | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using macOS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
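For what it's worth, a version check that works whether `PYARROW_VERSION` is a plain string or an already-parsed version object could look like this (a sketch, not the code used in the repo):
```python
from packaging import version

import datasets

# packaging's Version objects expose .major/.minor/.micro
pyarrow_version = version.parse(str(datasets.config.PYARROW_VERSION))
if pyarrow_version.major < 3:
    print("old pyarrow:", pyarrow_version)
```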
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2871/timeline | null | completed | null | null | false | [
"I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.",
"Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:\r\n- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`\r\n- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists\r\n",
"Sorted. Thanks!",
"Reopening this. Although the `test_dataset_common.py` script works fine now.\r\n\r\nHas this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?\r\n\r\nhttps://github.com/huggingface/datasets/pull/2873",
"Hi @bwang482,\r\n\r\nIf you click on `Details` (on the right of your non passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.\r\n\r\nFor example, for [\"ci/circleci: run_dataset_script_tests_pyarrow_1\" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]\r\n= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =\r\n```\r\n\r\nTherefore, your PR non-passing test has nothing to do with this issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/5993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5993/comments | https://api.github.com/repos/huggingface/datasets/issues/5993/events | https://github.com/huggingface/datasets/issues/5993 | 1,776,643,555 | I_kwDODunzps5p5W3j | 5,993 | ValueError: Table schema does not match schema used to create file | [] | closed | false | null | 2 | 2023-06-27T10:54:07Z | 2023-06-27T15:36:42Z | 2023-06-27T15:32:44Z | null | ### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_parquet("demo.parquet")
```
```shell
>>>
ValueError: Table schema does not match schema used to create file:
table:
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs.
file:
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
```
---
I think this is because after the `.select_columns()` call with out of order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it.
```python
ds.features.arrow_schema
>>>
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
ds.data.schema
>>>
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53
```
So when we call `.to_parquet()`, the behind-the-scenes call to `datasets.io.parquet.ParquetDatasetWriter(...).write()` initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema`, and `pyarrow` then raises on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written 🙌
https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141
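(Until this is fixed, one possible workaround — a sketch that bypasses the `datasets` parquet writer and writes the backing Arrow table directly, at the cost of skipping the writer's own batching/feature handling:)
```python
import pyarrow.parquet as pq

# ds.data is the datasets Table wrapper; .table is the underlying pyarrow.Table
pq.write_table(ds.data.table, "demo.parquet")
```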
### Expected behavior
The dataset gets successfully saved as parquet.
*In the same way as it does if saving it as csv:
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_csv("demo.csv")
```
### Environment info
`python==3.11`
`datasets==2.13.1`
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5993/timeline | null | completed | null | null | false | [
"We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)",
"Thank you very much @lhoestq ! 🚀 "
] |
https://api.github.com/repos/huggingface/datasets/issues/4869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4869/comments | https://api.github.com/repos/huggingface/datasets/issues/4869/events | https://github.com/huggingface/datasets/pull/4869 | 1,345,513,758 | PR_kwDODunzps49hBGY | 4,869 | Fix typos in documentation | [] | closed | false | null | 1 | 2022-08-21T15:10:03Z | 2022-08-22T09:25:39Z | 2022-08-22T09:09:58Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4869/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4869.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4869",
"merged_at": "2022-08-22T09:09:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4869.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4869"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2595/comments | https://api.github.com/repos/huggingface/datasets/issues/2595/events | https://github.com/huggingface/datasets/issues/2595 | 937,483,120 | MDU6SXNzdWU5Mzc0ODMxMjA= | 2,595 | ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-06T03:20:55Z | 2021-07-06T05:59:49Z | 2021-07-06T05:59:49Z | null | Error traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 from datasets import load_dataset, load_metric
2
----> 3 common_voice_train = load_dataset("common_voice", "pa-IN", split="train+validation")
4 common_voice_test = load_dataset("common_voice", "pa-IN", split="test")
9 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/common_voice/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b/common_voice.py in <module>()
19
20 import datasets
---> 21 from datasets.tasks import AutomaticSpeechRecognition
22
23
ModuleNotFoundError: No module named 'datasets.tasks' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2595/timeline | null | completed | null | null | false | [
"Hi @profsatwinder.\r\n\r\nIt looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists.",
"@albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again. "
] |
https://api.github.com/repos/huggingface/datasets/issues/652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/652/comments | https://api.github.com/repos/huggingface/datasets/issues/652/events | https://github.com/huggingface/datasets/pull/652 | 705,390,850 | MDExOlB1bGxSZXF1ZXN0NDkwMTI3MjIx | 652 | handle connection error in download_prepared_from_hf_gcs | [] | closed | false | null | 0 | 2020-09-21T08:21:11Z | 2020-09-21T08:28:43Z | 2020-09-21T08:28:42Z | null | Fix #647 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/652/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/652/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/652",
"merged_at": "2020-09-21T08:28:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/652"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/959/comments | https://api.github.com/repos/huggingface/datasets/issues/959/events | https://github.com/huggingface/datasets/pull/959 | 754,418,610 | MDExOlB1bGxSZXF1ZXN0NTMwMzIxOTM1 | 959 | Add Tunizi Dataset | [] | closed | false | null | 0 | 2020-12-01T13:59:39Z | 2020-12-03T14:21:41Z | 2020-12-03T14:21:40Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/959/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/959/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/959.diff",
"html_url": "https://github.com/huggingface/datasets/pull/959",
"merged_at": "2020-12-03T14:21:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/959.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/959"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/5928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5928/comments | https://api.github.com/repos/huggingface/datasets/issues/5928/events | https://github.com/huggingface/datasets/pull/5928 | 1,744,098,371 | PR_kwDODunzps5SUXPC | 5,928 | Fix link to quickstart docs in README.md | [] | closed | false | null | 3 | 2023-06-06T15:23:01Z | 2023-06-06T15:52:34Z | 2023-06-06T15:43:53Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5928/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5928",
"merged_at": "2023-06-06T15:43:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5928"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006693 / 0.011353 (-0.004660) | 0.004331 / 0.011008 (-0.006677) | 0.098022 / 0.038508 (0.059514) | 0.032764 / 0.023109 (0.009654) | 0.295812 / 0.275898 (0.019914) | 0.325029 / 0.323480 (0.001550) | 0.005779 / 0.007986 (-0.002206) | 0.005381 / 0.004328 (0.001052) | 0.075785 / 0.004250 (0.071535) | 0.048759 / 0.037052 (0.011707) | 0.308986 / 0.258489 (0.050497) | 0.348000 / 0.293841 (0.054159) | 0.027686 / 0.128546 (-0.100860) | 0.008839 / 0.075646 (-0.066807) | 0.328389 / 0.419271 (-0.090883) | 0.062173 / 0.043533 (0.018640) | 0.312257 / 0.255139 (0.057119) | 0.325024 / 0.283200 (0.041824) | 0.103886 / 0.141683 (-0.037797) | 1.440215 / 1.452155 (-0.011940) | 1.528665 / 1.492716 (0.035948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210082 / 0.018006 (0.192076) | 0.442480 / 0.000490 (0.441990) | 0.006559 / 0.000200 (0.006359) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026774 / 0.037411 (-0.010637) | 0.108362 / 0.014526 (0.093837) | 0.117631 / 0.176557 (-0.058926) | 0.176657 / 0.737135 (-0.560478) | 0.124154 / 0.296338 (-0.172184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428136 / 0.215209 (0.212927) | 4.270287 / 2.077655 (2.192632) | 2.014728 / 1.504120 (0.510608) | 1.806772 / 1.541195 (0.265577) | 1.946284 / 1.468490 
(0.477794) | 0.525542 / 4.584777 (-4.059235) | 3.667025 / 3.745712 (-0.078687) | 1.878751 / 5.269862 (-3.391111) | 1.048321 / 4.565676 (-3.517356) | 0.065550 / 0.424275 (-0.358725) | 0.011881 / 0.007607 (0.004274) | 0.529873 / 0.226044 (0.303829) | 5.289641 / 2.268929 (3.020712) | 2.489403 / 55.444624 (-52.955221) | 2.141037 / 6.876477 (-4.735440) | 2.230735 / 2.142072 (0.088662) | 0.639781 / 4.805227 (-4.165447) | 0.141410 / 6.500664 (-6.359254) | 0.064374 / 0.075469 (-0.011095) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.159462 / 1.841788 (-0.682325) | 14.524730 / 8.074308 (6.450422) | 13.578070 / 10.191392 (3.386678) | 0.152138 / 0.680424 (-0.528286) | 0.017255 / 0.534201 (-0.516946) | 0.387607 / 0.579283 (-0.191676) | 0.413652 / 0.434364 (-0.020712) | 0.453644 / 0.540337 (-0.086693) | 0.550051 / 1.386936 (-0.836885) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006668 / 0.011353 (-0.004685) | 0.004677 / 0.011008 (-0.006331) | 0.075950 / 0.038508 (0.037442) | 0.032439 / 0.023109 (0.009329) | 0.381839 / 0.275898 (0.105941) | 0.419411 / 0.323480 (0.095931) | 0.005813 / 0.007986 (-0.002172) | 0.004090 / 0.004328 (-0.000238) | 0.075052 / 0.004250 (0.070802) | 0.048453 / 0.037052 (0.011401) | 0.388076 / 0.258489 (0.129587) | 0.431793 / 0.293841 (0.137952) | 0.028408 / 0.128546 (-0.100138) | 0.009028 / 0.075646 (-0.066618) | 0.082569 / 0.419271 (-0.336702) | 0.046772 / 0.043533 (0.003239) | 0.380182 / 0.255139 (0.125043) | 0.401828 / 0.283200 (0.118629) | 0.105388 / 0.141683 (-0.036294) | 1.453356 / 1.452155 (0.001201) | 1.561483 / 1.492716 (0.068767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.008922 / 0.018006 (-0.009084) | 0.444112 / 0.000490 (0.443623) | 0.002756 / 0.000200 (0.002556) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030408 / 0.037411 (-0.007003) | 0.112924 / 0.014526 (0.098399) | 0.124625 / 0.176557 (-0.051932) | 0.176915 / 0.737135 (-0.560220) | 0.129141 / 0.296338 (-0.167198) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448197 / 0.215209 (0.232987) | 4.476548 / 2.077655 (2.398893) | 2.243977 / 1.504120 (0.739857) | 2.054060 / 1.541195 (0.512865) | 2.130680 / 1.468490 (0.662190) | 0.526815 / 4.584777 (-4.057962) | 3.759312 / 3.745712 (0.013600) | 3.333618 / 5.269862 (-1.936244) | 1.579611 / 4.565676 (-2.986065) | 0.065714 / 0.424275 (-0.358561) | 0.011939 / 0.007607 (0.004332) | 0.550313 / 0.226044 (0.324269) | 5.476946 / 2.268929 (3.208018) | 2.726521 / 55.444624 (-52.718104) | 2.364977 / 6.876477 (-4.511499) | 2.450624 / 2.142072 (0.308551) | 0.647174 / 4.805227 (-4.158053) | 0.141265 / 6.500664 (-6.359399) | 0.065493 / 0.075469 (-0.009976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249702 / 1.841788 (-0.592085) | 15.205647 / 8.074308 (7.131338) | 14.678310 / 10.191392 (4.486918) | 0.141539 / 0.680424 (-0.538884) | 0.017323 / 0.534201 (-0.516878) | 0.387602 / 0.579283 (-0.191681) | 0.415106 / 0.434364 (-0.019258) | 0.458146 / 0.540337 (-0.082192) | 0.553318 / 1.386936 (-0.833618) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008567 / 0.011353 (-0.002786) | 0.005245 / 0.011008 (-0.005763) | 0.115074 / 0.038508 (0.076566) | 0.032567 / 0.023109 (0.009458) | 0.352297 / 0.275898 (0.076399) | 0.393403 / 0.323480 (0.069923) | 0.006402 / 0.007986 (-0.001583) | 0.004353 / 0.004328 (0.000025) | 0.087903 / 0.004250 (0.083653) | 0.048424 / 0.037052 (0.011372) | 0.370078 / 0.258489 (0.111588) | 0.410192 / 0.293841 (0.116351) | 0.042396 / 0.128546 (-0.086150) | 0.014426 / 0.075646 (-0.061220) | 0.411358 / 0.419271 (-0.007914) | 0.059546 / 0.043533 (0.016013) | 0.364721 / 0.255139 (0.109582) | 0.385100 / 0.283200 (0.101901) | 0.100572 / 0.141683 (-0.041111) | 1.741457 / 1.452155 (0.289302) | 1.933134 / 1.492716 (0.440418) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217177 / 0.018006 (0.199171) | 0.510399 / 0.000490 (0.509909) | 0.005542 / 0.000200 (0.005342) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026852 / 0.037411 (-0.010559) | 0.125580 / 0.014526 (0.111054) | 0.132164 / 0.176557 (-0.044392) | 0.189073 / 0.737135 (-0.548063) | 0.135980 / 0.296338 (-0.160358) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.601924 / 0.215209 (0.386715) | 5.891397 / 2.077655 (3.813743) | 2.389494 / 1.504120 (0.885375) | 2.044013 / 1.541195 (0.502818) | 2.019367 / 1.468490 
(0.550877) | 0.883807 / 4.584777 (-3.700970) | 5.141349 / 3.745712 (1.395636) | 2.607415 / 5.269862 (-2.662446) | 1.567268 / 4.565676 (-2.998409) | 0.102738 / 0.424275 (-0.321537) | 0.013480 / 0.007607 (0.005873) | 0.744979 / 0.226044 (0.518934) | 7.404182 / 2.268929 (5.135254) | 2.983406 / 55.444624 (-52.461219) | 2.331847 / 6.876477 (-4.544630) | 2.465119 / 2.142072 (0.323047) | 1.106725 / 4.805227 (-3.698502) | 0.205779 / 6.500664 (-6.294885) | 0.081019 / 0.075469 (0.005550) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.527840 / 1.841788 (-0.313947) | 16.989487 / 8.074308 (8.915179) | 18.016123 / 10.191392 (7.824731) | 0.216157 / 0.680424 (-0.464266) | 0.025393 / 0.534201 (-0.508808) | 0.496743 / 0.579283 (-0.082540) | 0.575365 / 0.434364 (0.141002) | 0.559978 / 0.540337 (0.019641) | 0.677474 / 1.386936 (-0.709462) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008913 / 0.011353 (-0.002440) | 0.005540 / 0.011008 (-0.005469) | 0.100001 / 0.038508 (0.061493) | 0.034432 / 0.023109 (0.011323) | 0.419824 / 0.275898 (0.143926) | 0.443566 / 0.323480 (0.120086) | 0.006372 / 0.007986 (-0.001614) | 0.004405 / 0.004328 (0.000077) | 0.094927 / 0.004250 (0.090677) | 0.050300 / 0.037052 (0.013248) | 0.424806 / 0.258489 (0.166317) | 0.480793 / 0.293841 (0.186952) | 0.050869 / 0.128546 (-0.077677) | 0.015899 / 0.075646 (-0.059747) | 0.111413 / 0.419271 (-0.307859) | 0.058093 / 0.043533 (0.014560) | 0.430575 / 0.255139 (0.175436) | 0.483786 / 0.283200 (0.200586) | 0.106878 / 0.141683 (-0.034805) | 1.763576 / 1.452155 (0.311422) | 1.837750 / 1.492716 (0.345033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011565 / 0.018006 (-0.006441) | 0.484411 / 0.000490 (0.483922) | 0.004869 / 0.000200 (0.004669) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030706 / 0.037411 (-0.006706) | 0.126901 / 0.014526 (0.112375) | 0.130367 / 0.176557 (-0.046190) | 0.206568 / 0.737135 (-0.530567) | 0.146505 / 0.296338 (-0.149834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627266 / 0.215209 (0.412057) | 6.314049 / 2.077655 (4.236394) | 2.582920 / 1.504120 (1.078800) | 2.249401 / 1.541195 (0.708206) | 2.244960 / 1.468490 (0.776470) | 0.907770 / 4.584777 (-3.677007) | 5.349622 / 3.745712 (1.603910) | 4.591244 / 5.269862 (-0.678618) | 2.301612 / 4.565676 (-2.264064) | 0.108813 / 0.424275 (-0.315462) | 0.013187 / 0.007607 (0.005580) | 0.806071 / 0.226044 (0.580027) | 7.843903 / 2.268929 (5.574974) | 3.405968 / 55.444624 (-52.038656) | 2.564301 / 6.876477 (-4.312176) | 2.652208 / 2.142072 (0.510135) | 1.168142 / 4.805227 (-3.637086) | 0.218551 / 6.500664 (-6.282113) | 0.078120 / 0.075469 (0.002651) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.562517 / 1.841788 (-0.279271) | 17.519325 / 8.074308 (9.445017) | 20.727083 / 10.191392 (10.535691) | 0.207135 / 0.680424 (-0.473288) | 0.028208 / 0.534201 (-0.505993) | 0.496157 / 0.579283 (-0.083126) | 0.569239 / 0.434364 (0.134875) | 0.566137 / 0.540337 (0.025799) | 0.704208 / 1.386936 (-0.682728) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5675/comments | https://api.github.com/repos/huggingface/datasets/issues/5675/events | https://github.com/huggingface/datasets/issues/5675 | 1,641,763,478 | I_kwDODunzps5h21KW | 5,675 | Filter datasets by language code | [] | closed | false | null | 4 | 2023-03-27T09:42:28Z | 2023-03-30T08:08:15Z | 2023-03-30T08:08:15Z | null | Hi! I use the language search field on https://huggingface.co/datasets
However, some of the datasets tagged by ISO language code are not accessible by this search form.
For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag but it is not included in the Languages search form.
I've also noticed the same problem with `mhr` (see https://huggingface.co/datasets/AigizK/mari-russian-parallel-corpora) | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5675/timeline | null | completed | null | null | false | [
"The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missing from the code=>language mapping) would be much more convenient!",
"Hi! I've opened a PR to make these languages searchable on the Hub.",
"Thanks @mariosasko!\r\nDo you think it is possible to turn this into a more scalable pipeline? Such as:\r\n1. Looping through all the datasets on the hub and collecting the set of all their language codes;\r\n2. Selecting the codes not covered yet in `Language.ts`\r\n3. Looking up their codes at https://iso639-3.sil.org/code_tables/639/data\r\n4. Adding all the newly found language codes to `Language.ts`",
"@avidale This has been discussed in https://github.com/huggingface/datasets/issues/4881, so also feel free to share your opinion there."
] |
https://api.github.com/repos/huggingface/datasets/issues/1363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1363/comments | https://api.github.com/repos/huggingface/datasets/issues/1363/events | https://github.com/huggingface/datasets/pull/1363 | 760,160,944 | MDExOlB1bGxSZXF1ZXN0NTM1MDM4NjM0 | 1,363 | Adding OPUS MultiUN | [] | closed | false | null | 0 | 2020-12-09T09:29:01Z | 2020-12-09T17:54:20Z | 2020-12-09T17:54:20Z | null | Adding UnMulti
http://www.euromatrixplus.net/multi-un/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1363/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1363.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1363",
"merged_at": "2020-12-09T17:54:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1363.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1363"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/149/comments | https://api.github.com/repos/huggingface/datasets/issues/149/events | https://github.com/huggingface/datasets/issues/149 | 619,735,739 | MDU6SXNzdWU2MTk3MzU3Mzk= | 149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2020-05-17T15:42:39Z | 2020-05-18T17:01:46Z | 2020-05-18T17:01:46Z | null | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/149/timeline | null | completed | null | null | false | [
"@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for now?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2460/comments | https://api.github.com/repos/huggingface/datasets/issues/2460/events | https://github.com/huggingface/datasets/pull/2460 | 915,268,536 | MDExOlB1bGxSZXF1ZXN0NjY1MTAyMjA4 | 2,460 | Revert default in-memory for small datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"closed_at": "2021-06-08T18:51:04Z",
"closed_issues": 2,
"created_at": "2021-04-20T16:49:16Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-06-08T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/4",
"id": 6680642,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/4/labels",
"node_id": "MDk6TWlsZXN0b25lNjY4MDY0Mg==",
"number": 4,
"open_issues": 0,
"state": "closed",
"title": "1.8",
"updated_at": "2021-06-08T18:51:37Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/4"
} | 1 | 2021-06-08T17:14:23Z | 2021-06-08T18:04:14Z | 2021-06-08T17:55:43Z | null | Close #2458 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2460/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2460/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2460.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2460",
"merged_at": "2021-06-08T17:55:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2460.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2460"
} | true | [
"Thank you for this welcome change guys!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2362/comments | https://api.github.com/repos/huggingface/datasets/issues/2362/events | https://github.com/huggingface/datasets/pull/2362 | 892,100,749 | MDExOlB1bGxSZXF1ZXN0NjQ0ODYzOTQw | 2,362 | Fix web_nlg metadata | [] | closed | false | null | 3 | 2021-05-14T17:15:07Z | 2021-05-17T13:44:17Z | 2021-05-17T13:42:28Z | null | Our metadata storage system does not support `.` inside keys. cc @Pierrci
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2362/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2362",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2362"
} | true | [
"Hi ! `release_v2.1` and the others are dataset configuration names.\r\n\r\nThe configuration names are used to show the right code snippet in the UI to load the dataset.\r\nFor example if the parsing of the web_nlg tags worked correctly we would have:\r\n\r\n\r\nTherefore I don't think it's a good idea to rename the configurations from `release_v2.1` to `release_v2_1` as the code snippet would be wrong in this case.\r\n\r\nMoreover we can't really disallow dots in configuration names and rename the configurations since it would be a big breaking change. It's commonly used, especially with multilingual datasets. For example `load_dataset(\"indic_glue\", \"sna.bn\")`.\r\n\r\nIs this something that can be fixed on the moonlanding side instead ?",
"> Is this something that can be fixed on the moonlanding side instead ?\r\n\r\nNot really unless we change database:)\r\n\r\nWe'll maybe try to find another workaround, but super low-prio given that it's the only dataset that has those dotted keys in the YAML metadata",
"Ok, should we close this PR then ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2909/comments | https://api.github.com/repos/huggingface/datasets/issues/2909/events | https://github.com/huggingface/datasets/pull/2909 | 996,002,180 | PR_kwDODunzps4rutdo | 2,909 | fix anli splits | [] | closed | false | null | 0 | 2021-09-14T13:10:35Z | 2021-10-13T11:27:49Z | 2021-10-13T11:27:49Z | null | I can't run the tests for dummy data, facing this error
`ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'.
tests/conftest.py:10: in <module>
from datasets import config
E ImportError: cannot import name 'config' from 'datasets' (unknown location)` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2909/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2909/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2909.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2909",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2909.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2909"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/844/comments | https://api.github.com/repos/huggingface/datasets/issues/844/events | https://github.com/huggingface/datasets/pull/844 | 741,835,661 | MDExOlB1bGxSZXF1ZXN0NTIwMDgwNzM5 | 844 | add newlines to amazon desc | [] | closed | false | null | 0 | 2020-11-12T18:41:20Z | 2020-11-12T18:42:25Z | 2020-11-12T18:42:21Z | null | Just a quick formatting fix to hopefully make it render nicer on Viewer | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/844/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/844.diff",
"html_url": "https://github.com/huggingface/datasets/pull/844",
"merged_at": "2020-11-12T18:42:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/844.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/844"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3544/comments | https://api.github.com/repos/huggingface/datasets/issues/3544/events | https://github.com/huggingface/datasets/issues/3544 | 1,095,784,681 | I_kwDODunzps5BUFjp | 3,544 | Ability to split a dataset in multiple files. | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2022-01-06T23:02:25Z | 2022-01-06T23:02:25Z | null | null | Hello,
**Is your feature request related to a problem? Please describe.**
My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset.
I understand that we shouldn't overwrite an arrow file as this could cause Segfault and so on. Before 1.16, I was able to overwrite the dataset and that would work most of the time with some retries.
**Describe the solution you'd like**
I was thinking that if we could append `Dataset._data_files`, when the workers reload the Dataset, they would get the new columns.
**Describe alternatives you've considered**
I currently need to
1. Save multiple "versions" of the dataset and load the latest.
2. Try working with cache files to get the latest columns.
**Additional context**
I think this would be a great addition to HFDataset as Parquet supports multi-files input out of the box!
I can make a PR myself with some pointers as needed :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3544/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/561/comments | https://api.github.com/repos/huggingface/datasets/issues/561/events | https://github.com/huggingface/datasets/pull/561 | 690,871,415 | MDExOlB1bGxSZXF1ZXN0NDc3Njk1NDQy | 561 | Made `share_dataset` more readable | [] | closed | false | null | 0 | 2020-09-02T09:34:48Z | 2020-09-03T09:00:30Z | 2020-09-03T09:00:29Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/561/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/561.diff",
"html_url": "https://github.com/huggingface/datasets/pull/561",
"merged_at": "2020-09-03T09:00:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/561.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/561"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/352/comments | https://api.github.com/repos/huggingface/datasets/issues/352/events | https://github.com/huggingface/datasets/pull/352 | 653,128,883 | MDExOlB1bGxSZXF1ZXN0NDQ2MTA1Mjky | 352 | 🐛[BugFix]fix seqeval | [] | closed | false | null | 7 | 2020-07-08T09:12:12Z | 2020-07-16T08:26:46Z | 2020-07-16T08:26:46Z | null | Fix seqeval process labels such as 'B', 'B-ARGM-LOC' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/352/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/352/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/352.diff",
"html_url": "https://github.com/huggingface/datasets/pull/352",
"merged_at": "2020-07-16T08:26:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/352.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/352"
} | true | [
"I think this is good but can you detail a bit the behavior before and after your fix?",
"examples:\r\n\r\ninput: `['B', 'I', 'I', 'O', 'B', 'I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`\r\nafter: `[('_', 0, 2), ('_', 4, 5)]`\r\n\r\ninput: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']`\r\nbefore: `[('LOC', 0, 2), ('TIME', 4, 5)]`\r\nafter: `[('ARGM-LOC', 0, 2), ('ARGM-TIME', 4, 5)]`\r\n\r\nThis is my test code:\r\n\r\n```python\r\nfrom metrics.seqeval.seqeval import end_of_chunk, start_of_chunk\r\n\r\n\r\ndef before_get_entities(seq, suffix=False):\r\n \"\"\"Gets entities from sequence.\r\n Args:\r\n seq (list): sequence of labels.\r\n Returns:\r\n list: list of (chunk_type, chunk_start, chunk_end).\r\n \"\"\"\r\n if any(isinstance(s, list) for s in seq):\r\n seq = [item for sublist in seq for item in sublist + ['O']]\r\n\r\n prev_tag = 'O'\r\n prev_type = ''\r\n begin_offset = 0\r\n chunks = []\r\n for i, chunk in enumerate(seq + ['O']):\r\n if suffix:\r\n tag = chunk[-1]\r\n type_ = chunk.split('-')[0]\r\n else:\r\n tag = chunk[0]\r\n type_ = chunk.split('-')[-1]\r\n\r\n if end_of_chunk(prev_tag, tag, prev_type, type_):\r\n chunks.append((prev_type, begin_offset, i - 1))\r\n if start_of_chunk(prev_tag, tag, prev_type, type_):\r\n begin_offset = i\r\n prev_tag = tag\r\n prev_type = type_\r\n\r\n return chunks\r\n\r\n\r\ndef after_get_entities(seq, suffix=False):\r\n \"\"\"Gets entities from sequence.\r\n Args:\r\n seq (list): sequence of labels.\r\n Returns:\r\n list: list of (chunk_type, chunk_start, chunk_end).\r\n \"\"\"\r\n if any(isinstance(s, list) for s in seq):\r\n seq = [item for sublist in seq for item in sublist + ['O']]\r\n\r\n prev_tag = 'O'\r\n prev_type = ''\r\n begin_offset = 0\r\n chunks = []\r\n for i, chunk in enumerate(seq + ['O']):\r\n if suffix:\r\n tag = chunk[-1]\r\n type_ = chunk[:-1].rsplit('-', maxsplit=1)[0] or '_'\r\n else:\r\n tag = chunk[0]\r\n type_ = chunk[1:].split('-', maxsplit=1)[-1] or '_'\r\n\r\n if end_of_chunk(prev_tag, tag, prev_type, type_):\r\n chunks.append((prev_type, begin_offset, i - 1))\r\n if start_of_chunk(prev_tag, tag, prev_type, type_):\r\n begin_offset = i\r\n prev_tag = tag\r\n prev_type = type_\r\n\r\n return chunks\r\n\r\n\r\ndef main():\r\n examples_1 = ['B', 'I', 'I', 'O', 'B', 'I']\r\n print(before_get_entities(examples_1))\r\n print(after_get_entities(examples_1))\r\n examples_2 = ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']\r\n print(before_get_entities(examples_2))\r\n print(after_get_entities(examples_2))\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"And we can get more examples not correct, such as:\r\n\r\ninput: `['B', 'I', 'I-I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2)]`\r\nafter: `[('_', 0, 1), ('I', 2, 2)]`\r\n\r\ninput: `['B-ARGM-TIME', 'I-ARGM-TIME', 'I-TIME']`\r\nbefore: `[('TIME', 0, 2)]`\r\nafter: `[('ARGM-TIME', 0, 1), ('TIME', 2, 2)]`",
"I think i didn't break any thing. Maybe the checks should be restart?",
"Could you please rebase from master @AlongWY ? This should fix the CI stuff",
"ok, i will do it",
"Indeed the official repo is quite stale. Let's merge it here, thanks @AlongWY "
] |
https://api.github.com/repos/huggingface/datasets/issues/3095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3095/comments | https://api.github.com/repos/huggingface/datasets/issues/3095/events | https://github.com/huggingface/datasets/issues/3095 | 1,027,453,146 | I_kwDODunzps49PbDa | 3,095 | `cast_column` makes audio decoding fail | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-15T13:36:58Z | 2023-04-07T09:43:20Z | 2021-10-15T15:38:30Z | null | ## Describe the bug
After changing the sampling rate automatic decoding fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import datasets
ds = load_dataset("common_voice", "ab", split="train")
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
print(ds[0]["audio"]) # <- this fails currently
```
yields:
```
TypeError: forward() takes 2 positional arguments but 4 were given
```
## Expected results
no failure
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 1.13.2 (master)
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3095/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3095/timeline | null | completed | null | null | false | [
"cc @anton-l @albertvillanova ",
"Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_dataset(\"arabic_speech_corpus\", split=\"train\")\r\nds = ds.cast_column(\"audio\", datasets.features.Audio(sampling_rate=16_000))\r\nprint(ds[0][\"audio\"])\r\n```\r\n\r\nI'm fixing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/4637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4637/comments | https://api.github.com/repos/huggingface/datasets/issues/4637/events | https://github.com/huggingface/datasets/issues/4637 | 1,294,818,236 | I_kwDODunzps5NLVu8 | 4,637 | The "all" split breaks streaming | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 6 | 2022-07-05T21:56:49Z | 2022-07-15T13:59:30Z | null | null | ## Describe the bug
Not sure if this is a bug or just the way streaming works, but setting `streaming=True` did not work when setting `split="all"`
## Steps to reproduce the bug
The following works:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all')
```
The following throws `ValueError: Bad split: all. Available splits: ['train', 'validation', 'test']`:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all', streaming=True)
```
## Expected results
An iterator over all splits.
## Actual results
I had to do the following to achieve the desired result:
```python
from itertools import chain
ds = load_dataset('super_glue', 'wsc.fixed', streaming=True)
it = chain.from_iterable(ds.values())
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4637/timeline | null | null | null | null | false | [
"Thanks for reporting @cakiki.\r\n\r\nYes, this is a bug. We are investigating it.",
"@albertvillanova Nice! Let me know if it's something I can fix my self; would love to contribtue!",
"@cakiki I was working on this but if you would like to contribute, go ahead. I will close my PR. ;)\r\n\r\nFor the moment I just pushed the test (to see if it impacts other tests).",
"It impacted the test `test_generator_based_download_and_prepare` and I have fixed this.\r\n\r\nSo that you can copy the test I implemented in my PR and then implement a fix for this issue that passes the test `tests/test_builder.py::test_builder_as_streaming_dataset`.",
"Hi @cakiki are you still interested in working on this? Are you planning to open a PR?",
"Hi @albertvillanova ! Sorry it took so long; I wanted to spend this weekend working on it."
] |
https://api.github.com/repos/huggingface/datasets/issues/3132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3132/comments | https://api.github.com/repos/huggingface/datasets/issues/3132/events | https://github.com/huggingface/datasets/issues/3132 | 1,032,505,430 | I_kwDODunzps49ishW | 3,132 | Support Audio feature in streaming mode | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-10-21T13:32:18Z | 2021-11-12T14:13:04Z | 2021-11-12T14:13:04Z | null | Currently, Audio feature is only supported for non-streaming datasets.
Due to the large size of many speech datasets, we should also support Audio feature in streaming mode.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3132/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2874/comments | https://api.github.com/repos/huggingface/datasets/issues/2874/events | https://github.com/huggingface/datasets/pull/2874 | 989,685,328 | MDExOlB1bGxSZXF1ZXN0NzI4Mzg2Mjg4 | 2,874 | Support streaming datasets that use pathlib | [] | closed | false | null | 3 | 2021-09-07T07:35:49Z | 2021-09-07T18:25:22Z | 2021-09-07T11:41:15Z | null | This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2874/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2874",
"merged_at": "2021-09-07T11:41:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2874"
} | true | [
"I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```",
"@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... 😅 ",
"No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5106/comments | https://api.github.com/repos/huggingface/datasets/issues/5106/events | https://github.com/huggingface/datasets/pull/5106 | 1,406,635,758 | PR_kwDODunzps5ArM6G | 5,106 | Fix task template reload from dict | [] | closed | false | null | 2 | 2022-10-12T18:33:49Z | 2022-10-13T09:59:07Z | 2022-10-13T09:56:51Z | null | Since #4926 the JSON dumps are simplified and it made task template dicts empty by default.
I fixed this by always including the task name which is needed to reload a task from a dict | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5106/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5106.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5106",
"merged_at": "2022-10-13T09:56:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5106.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5106"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Just wondering if there might be other data classes default values missed that could cause an issue... Apart from feature-like classes and tasks, I don't see any others though...\r\n\r\nI think we're good ! `asdict` is used on the DatasetInfo attributes like features, tasks etc. and they all support dict conversion properly now\r\n\r\n> And a question: but this information about the tasks is no longer being saved as YAML tags in the dataset card; won't be a problem with current datasets using task templates (with this information in their metadata JSON) once we replace the JSON by the YAML tags (which do not have this information about the task templates)?\r\n\r\nIn the long run we'll use the train_eval_index YAML tags instead, but I agree when removing the JSON files we should try to not break existing code that may rely on this"
] |
https://api.github.com/repos/huggingface/datasets/issues/1632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1632/comments | https://api.github.com/repos/huggingface/datasets/issues/1632/events | https://github.com/huggingface/datasets/issues/1632 | 774,388,625 | MDU6SXNzdWU3NzQzODg2MjU= | 1,632 | SICK dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 0 | 2020-12-24T12:40:14Z | 2021-02-05T15:49:25Z | 2021-02-05T15:49:25Z | null | Hi, this would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you.
## Adding a Dataset
- **Name:** SICK
- **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic, and semantic phenomena.
- **Paper:** https://www.aclweb.org/anthology/L14-1314/
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** This dataset is well-known in the NLP community and is used for recognizing entailment between sentences.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1632/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4012/comments | https://api.github.com/repos/huggingface/datasets/issues/4012/events | https://github.com/huggingface/datasets/pull/4012 | 1,180,350,083 | PR_kwDODunzps40_qgo | 4,012 | Rename wer to cer | [] | closed | false | null | 0 | 2022-03-25T05:06:05Z | 2022-03-28T13:57:25Z | 2022-03-28T13:57:25Z | null | wer variable changed to cer in README file
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4012/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4012/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4012.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4012",
"merged_at": "2022-03-28T13:57:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4012.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4012"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3695/comments | https://api.github.com/repos/huggingface/datasets/issues/3695/events | https://github.com/huggingface/datasets/pull/3695 | 1,129,730,148 | PR_kwDODunzps4yXP44 | 3,695 | Fix ClassLabel to/from dict when passed names_file | [] | closed | false | null | 0 | 2022-02-10T09:47:10Z | 2022-02-11T23:02:32Z | 2022-02-11T23:02:31Z | null | Currently, `names_file` is a field of the data class `ClassLabel`, thus appearing when transforming it to dict (when saving infos). Afterwards, when trying to read it from infos, it conflicts with the other field `names`.
This PR, removes `names_file` as a field of the data class `ClassLabel`.
- it is only used at instantiation to generate the `labels` field
Fix #3631. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3695/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3695/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3695.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3695",
"merged_at": "2022-02-11T23:02:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3695.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3695"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4834/comments | https://api.github.com/repos/huggingface/datasets/issues/4834/events | https://github.com/huggingface/datasets/pull/4834 | 1,336,993,511 | PR_kwDODunzps49FJOu | 4,834 | Fix documentation card of recipe_nlg dataset | [] | closed | false | null | 1 | 2022-08-12T09:49:39Z | 2022-08-12T11:28:18Z | 2022-08-12T11:13:40Z | null | Fix documentation card of recipe_nlg dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4834/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4834",
"merged_at": "2022-08-12T11:13:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4834"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |