Column schema and per-column statistics:

| column          | dtype        | statistics                                 |
|-----------------|--------------|--------------------------------------------|
| id              | int64        | 599M to 3.29B                              |
| url             | string       | lengths 58 to 61                           |
| html_url        | string       | lengths 46 to 51                           |
| number          | int64        | 1 to 7.72k                                 |
| title           | string       | lengths 1 to 290                           |
| state           | string       | 2 values                                   |
| comments        | int64        | 0 to 70                                    |
| created_at      | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at      | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at       | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login      | string       | lengths 3 to 26                            |
| labels          | list         | lengths 0 to 4                             |
| body            | string       | lengths 0 to 228k                          |
| is_pull_request | bool         | 2 classes                                  |
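Each record below follows this schema. As a rough, hedged sketch (the repository id and split name are placeholders chosen for illustration, not taken from this dump), a dataset with these columns could be loaded and filtered with the `datasets` library along these lines:

```python
# Minimal sketch: load a dataset with the schema above and keep only the rows
# that are plain issues (not pull requests) and still open.
# "user/github-issues" is a hypothetical repo id used purely for illustration.
from datasets import load_dataset

ds = load_dataset("user/github-issues", split="train")

open_issues = ds.filter(
    lambda row: row["state"] == "open" and not row["is_pull_request"]
)

print(open_issues.features)  # column names and dtypes
print(open_issues[0]["number"], open_issues[0]["title"])
```

The sample records follow, one block per issue or pull request.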
PR #4109 | Add Spearmanr Metric Card
id: 1,194,579,257 | url: https://github.com/huggingface/datasets/pull/4109 | API: https://api.github.com/repos/huggingface/datasets/issues/4109
state: closed | comments: 3 | user_login: emibaylor | labels: []
created_at: 2022-04-06T12:57:53 | updated_at: 2022-05-03T16:50:26 | closed_at: 2022-05-03T16:43:37
body:
null
is_pull_request: true
PR #4108 | Perplexity Speedup
id: 1,194,578,584 | url: https://github.com/huggingface/datasets/pull/4108 | API: https://api.github.com/repos/huggingface/datasets/issues/4108
state: closed | comments: 7 | user_login: emibaylor | labels: []
created_at: 2022-04-06T12:57:21 | updated_at: 2022-04-20T13:00:54 | closed_at: 2022-04-20T12:54:42
body:
This PR makes necessary changes to perplexity such that: - it runs much faster (via batching) - it throws an error when input is empty, or when input is one word without <BOS> token - it adds the option to add a <BOS> token Issues: - The values returned are extremely high, and I'm worried they aren't correct. Even if they are correct, they are sometimes returned as `inf`, which is not very useful (see [comment below](https://github.com/huggingface/datasets/pull/4108#discussion_r843931094) for some of the output values). - If the values are not correct, can you help me find the error? - If the values are correct, it might be worth it to measure something like perplexity per word, which would allow us to get actual values for the larger perplexities, instead of just `inf` Future: - `stride` is not currently implemented here. I have some thoughts on how to make it happen with batching, but I think it would be better to get another set of eyes to look at any possible errors causing such large values now rather than later.
is_pull_request: true
Issue #4107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
id: 1,194,484,885 | url: https://github.com/huggingface/datasets/issues/4107 | API: https://api.github.com/repos/huggingface/datasets/issues/4107
state: closed | comments: 5 | user_login: Pavithree | labels: ["bug"]
created_at: 2022-04-06T11:37:15 | updated_at: 2022-04-08T07:13:07 | closed_at: 2022-04-06T14:39:55
body:
## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows **Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive* *This is the subset of original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belongs to one particular subreddit thread. However, the dataset preview for train split returns the below mentioned error: Status code: 400 Exception: ArrowInvalid Message: Exceeded maximum rows When I try to load the same dataset it returns ArrowInvalid: Exceeded maximum rows error* Am I the one who added this dataset ? Yes
is_pull_request: false
PR #4106 | Support huggingface_hub 0.5
id: 1,194,393,892 | url: https://github.com/huggingface/datasets/pull/4106 | API: https://api.github.com/repos/huggingface/datasets/issues/4106
state: closed | comments: 14 | user_login: lhoestq | labels: []
created_at: 2022-04-06T10:15:25 | updated_at: 2022-04-08T10:28:43 | closed_at: 2022-04-08T10:22:23
body:
Following https://github.com/huggingface/datasets/issues/4105 `huggingface_hub` deprecated some parameters in `HfApi` in 0.5. This PR updates all the calls to HfApi to remove all the deprecations, <s>and I set the `hugginface_hub` requirement to `>=0.5.0`</s> cc @adrinjalali @LysandreJik
is_pull_request: true
Issue #4105 | push to hub fails with huggingface-hub 0.5.0
id: 1,194,297,119 | url: https://github.com/huggingface/datasets/issues/4105 | API: https://api.github.com/repos/huggingface/datasets/issues/4105
state: closed | comments: 5 | user_login: frascuchon | labels: ["bug"]
created_at: 2022-04-06T08:59:57 | updated_at: 2022-04-13T14:30:47 | closed_at: 2022-04-13T14:30:47
body:
## Describe the bug `ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id" ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("rubrix/news_test") ds.push_to_hub("<your-user>/news_test", token="<your-token>") ``` ## Expected results The dataset is successfully uploaded ## Actual results An error validation is raised: ```bash if repo_id and (name or organization): > raise ValueError( "Only pass `repo_id` and leave deprecated `name` and " "`organization` to be None." E ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.1 - `huggingface-hub`: 0.5 - Platform: macOS - Python version: 3.8.12 - PyArrow version: 6.0.0 cc @adrinjalali
is_pull_request: false
Issue #4104 | Add time series data - stock market
id: 1,194,072,966 | url: https://github.com/huggingface/datasets/issues/4104 | API: https://api.github.com/repos/huggingface/datasets/issues/4104
state: open | comments: 10 | user_login: rozeappletree | labels: ["dataset request"]
created_at: 2022-04-06T05:46:58 | updated_at: 2024-07-21T16:54:30 | closed_at: null
body:
## Adding a Time Series Dataset - **Name:** 2min ticker data for stock market - **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image - **Data:** Collected by myself from investing.com - **Motivation:** Test applicability of transformer based model on stock market / time series problem ![image](https://user-images.githubusercontent.com/45640029/161904077-52fe97cb-3720-4e3f-98ee-7f6720a056e2.png)
is_pull_request: false
PR #4103 | Add the `GSM8K` dataset
id: 1,193,987,104 | url: https://github.com/huggingface/datasets/pull/4103 | API: https://api.github.com/repos/huggingface/datasets/issues/4103
state: closed | comments: 2 | user_login: jon-tow | labels: []
created_at: 2022-04-06T04:07:52 | updated_at: 2022-04-12T15:38:28 | closed_at: 2022-04-12T10:21:16
body:
null
is_pull_request: true
PR #4102 | [hub] Fix `api.create_repo` call?
id: 1,193,616,722 | url: https://github.com/huggingface/datasets/pull/4102 | API: https://api.github.com/repos/huggingface/datasets/issues/4102
state: closed | comments: 2 | user_login: julien-c | labels: []
created_at: 2022-04-05T19:21:52 | updated_at: 2023-09-24T10:01:14 | closed_at: 2022-04-12T08:41:46
body:
null
is_pull_request: true
Issue #4101 | How can I download only the train and test split for full numbers using load_dataset()?
id: 1,193,399,204 | url: https://github.com/huggingface/datasets/issues/4101 | API: https://api.github.com/repos/huggingface/datasets/issues/4101
state: open | comments: 1 | user_login: Nakkhatra | labels: ["enhancement"]
created_at: 2022-04-05T16:00:15 | updated_at: 2022-04-06T13:09:01 | closed_at: null
body:
How can I download only the train and test split for full numbers using load_dataset()? I do not need the extra split and it will take 40 mins just to download in Colab. I have very short time in hand. Please help.
is_pull_request: false
PR #4100 | Improve RedCaps dataset card
id: 1,193,393,959 | url: https://github.com/huggingface/datasets/pull/4100 | API: https://api.github.com/repos/huggingface/datasets/issues/4100
state: closed | comments: 2 | user_login: mariosasko | labels: []
created_at: 2022-04-05T15:57:14 | updated_at: 2022-04-13T14:08:54 | closed_at: 2022-04-13T14:02:26
body:
This PR modifies the RedCaps card to: * fix the formatting of the Point of Contact fields on the Hub * speed up the image fetching logic (aligns it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (return None if **any** exception is thrown)
is_pull_request: true
Issue #4099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
id: 1,193,253,768 | url: https://github.com/huggingface/datasets/issues/4099 | API: https://api.github.com/repos/huggingface/datasets/issues/4099
state: closed | comments: 3 | user_login: andreybond | labels: ["bug"]
created_at: 2022-04-05T14:42:38 | updated_at: 2022-04-06T06:37:44 | closed_at: 2022-04-06T06:35:54
body:
## Describe the bug Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("nielsr/XFUN", "xfun.ja") ``` ## Expected results Dataset should be downloaded without exceptions ## Actual results Stack trace (for the second-time execution): Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477... Downloading data files: 100% 2/2 [00:00<00:00, 88.48it/s] Extracting data files: 100% 2/2 [00:00<00:00, 79.60it/s] UnicodeDecodeErrorTraceback (most recent call last) <ipython-input-31-79c26bd1109c> in <module> 1 from datasets import load_dataset 2 ----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja") /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 604 ) 605 --> 606 # By default, return all splits 607 if split is None: 608 split = {s: s for s in self.info.splits} /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 692 Args: 693 split: `datasets.Split` which subset of the data to read. --> 694 695 Returns: 696 `Dataset` /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys) /usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self) 252 if not self.disable: 253 self.display(check_delay=False) --> 254 255 def __iter__(self): 256 try: /usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self) 1183 for obj in iterable: 1184 yield obj -> 1185 return 1186 1187 mininterval = self.mininterval ~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths) 140 logger.info("Generating examples from = %s", filepath) 141 with open(filepath[0], "r") as f: --> 142 data = json.load(f) 143 144 for doc in data["documents"]: /usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 294 295 """ --> 296 return loads(fp.read(), 297 cls=cls, object_hook=object_hook, 298 parse_float=parse_float, parse_int=parse_int, /usr/lib/python3.6/encodings/ascii.py in decode(self, input, final) 24 class IncrementalDecoder(codecs.IncrementalDecoder): 25 def decode(self, input, final=False): ---> 26 return codecs.ascii_decode(input, self.errors)[0] 27 28 class StreamWriter(Codec,codecs.StreamWriter): UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: 2.0.0 (but reproduced with many previous versions) - Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu - Python version: 3.6.9 - PyArrow version: 6.0.1
is_pull_request: false
PR #4098 | Proposing WikiSplit metric card
id: 1,193,245,522 | url: https://github.com/huggingface/datasets/pull/4098 | API: https://api.github.com/repos/huggingface/datasets/issues/4098
state: closed | comments: 3 | user_login: sashavor | labels: []
created_at: 2022-04-05T14:36:34 | updated_at: 2022-10-11T09:10:21 | closed_at: 2022-04-05T15:42:28
body:
Pinging @lhoestq to ensure that my distinction between the dataset and the metric are clear :sweat_smile:
is_pull_request: true
PR #4097 | Updating FrugalScore metric card
id: 1,193,205,751 | url: https://github.com/huggingface/datasets/pull/4097 | API: https://api.github.com/repos/huggingface/datasets/issues/4097
state: closed | comments: 1 | user_login: sashavor | labels: []
created_at: 2022-04-05T14:09:24 | updated_at: 2022-04-05T15:07:35 | closed_at: 2022-04-05T15:01:46
body:
removing duplicate paragraph
is_pull_request: true
Issue #4096 | Add support for streaming Zarr stores for hosted datasets
id: 1,193,165,229 | url: https://github.com/huggingface/datasets/issues/4096 | API: https://api.github.com/repos/huggingface/datasets/issues/4096
state: closed | comments: 11 | user_login: jacobbieker | labels: ["enhancement"]
created_at: 2022-04-05T13:38:32 | updated_at: 2023-12-07T09:01:49 | closed_at: 2022-04-21T08:12:58
body:
**Is your feature request related to a problem? Please describe.** Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large, and on the order of TBs of data or 10's of TBs of data for a single dataset, it can be difficult to store the dataset locally for users. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823) as Zarr splits the data into lots of small chunks for fast loading, and that doesn't work well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading them as a single file, which seems to be working (see https://huggingface.co/datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work though. On the other hand, in https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily. **Describe the solution you'd like** A way to upload Zarr stores for hosted datasets so that we can stream it with xarray and fsspec. **Describe alternatives you've considered** Tarring each Zarr store individually and just extracting them in the dataset script -> Downside this is a lot of data that probably doesn't fit locally for a lot of potential users. Pre-prepare examples in a format like Parquet -> Would use a lot more storage, and a lot less flexibility, in the eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
is_pull_request: false
PR #4095 | fix typo in rename_column error message
id: 1,192,573,353 | url: https://github.com/huggingface/datasets/pull/4095 | API: https://api.github.com/repos/huggingface/datasets/issues/4095
state: closed | comments: 1 | user_login: hunterlang | labels: []
created_at: 2022-04-05T03:55:56 | updated_at: 2022-04-05T08:54:46 | closed_at: 2022-04-05T08:45:53
body:
I feel bad submitting such a tiny change as a PR but it confused me today 😄
is_pull_request: true
Issue #4094 | Helo Mayfrends
id: 1,192,534,414 | url: https://github.com/huggingface/datasets/issues/4094 | API: https://api.github.com/repos/huggingface/datasets/issues/4094
state: closed | comments: 0 | user_login: Budigming | labels: ["dataset request"]
created_at: 2022-04-05T02:42:57 | updated_at: 2022-04-05T07:16:42 | closed_at: 2022-04-05T07:16:42
body:
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
is_pull_request: false
Issue #4093 | elena-soare/crawled-ecommerce: missing dataset
id: 1,192,523,161 | url: https://github.com/huggingface/datasets/issues/4093 | API: https://api.github.com/repos/huggingface/datasets/issues/4093
state: closed | comments: 3 | user_login: seevaratnam | labels: ["dataset-viewer"]
created_at: 2022-04-05T02:25:19 | updated_at: 2022-04-12T09:34:53 | closed_at: 2022-04-12T09:34:53
body:
elena-soare/crawled-ecommerce **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
is_pull_request: false
PR #4092 | Fix dataset `amazon_us_reviews` metadata - 4/4/2022
id: 1,192,499,903 | url: https://github.com/huggingface/datasets/pull/4092 | API: https://api.github.com/repos/huggingface/datasets/issues/4092
state: closed | comments: 2 | user_login: trentonstrong | labels: []
created_at: 2022-04-05T01:39:45 | updated_at: 2022-04-08T12:35:41 | closed_at: 2022-04-08T12:29:31
body:
Fixes #4048 by running `dataset-cli test` to reprocess data and regenerate metadata. Additionally I've updated the README to include up-to-date counts for the subsets.
is_pull_request: true
Issue #4091 | Build a Dataset One Example at a Time Without Loading All Data Into Memory
id: 1,192,023,855 | url: https://github.com/huggingface/datasets/issues/4091 | API: https://api.github.com/repos/huggingface/datasets/issues/4091
state: closed | comments: 2 | user_login: aravind-tonita | labels: ["enhancement"]
created_at: 2022-04-04T16:19:24 | updated_at: 2022-04-20T14:31:00 | closed_at: 2022-04-20T14:31:00
body:
**Is your feature request related to a problem? Please describe.** I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.** **Describe the solution you'd like** I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset before hand. ``` # Initialize an empty Dataset, possibly from a known schema. dataset = Dataset() # Read in examples one by one using a custom data streamer. for example_dict in custom_example_dict_streamer("/path/to/raw/data"): # Add this example to the dict but do not store it in memory. dataset.add_item(example_dict) # Save the final dataset to disk as an Arrow-backed dataset. dataset.save_to_disk("/path/to/dataset") ... # I'd like to be able to later `load_from_disk` and use the loaded Dataset # just like any other memory-mapped pyarrow-backed HuggingFace dataset... loaded_dataset = Dataset.load_from_disk("/path/to/dataset") loaded_dataset.set_format(type="torch", columnns=["foo", "bar", "baz"]) dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16) ... ``` **Describe alternatives you've considered** I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping. Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance!
is_pull_request: false
PR #4090 | Avoid writing empty license files
id: 1,191,956,734 | url: https://github.com/huggingface/datasets/pull/4090 | API: https://api.github.com/repos/huggingface/datasets/issues/4090
state: closed | comments: 1 | user_login: albertvillanova | labels: []
created_at: 2022-04-04T15:23:37 | updated_at: 2022-04-07T12:46:45 | closed_at: 2022-04-07T12:40:43
body:
This PR avoids the creation of empty `LICENSE` files.
is_pull_request: true
PR #4089 | Create metric card for Frugal Score
id: 1,191,915,196 | url: https://github.com/huggingface/datasets/pull/4089 | API: https://api.github.com/repos/huggingface/datasets/issues/4089
state: closed | comments: 1 | user_login: sashavor | labels: []
created_at: 2022-04-04T14:53:49 | updated_at: 2022-04-05T14:14:46 | closed_at: 2022-04-05T14:06:50
body:
Proposing metric card for Frugal Score. @albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know!
is_pull_request: true
PR #4088 | Remove unused legacy Beam utils
id: 1,191,901,172 | url: https://github.com/huggingface/datasets/pull/4088 | API: https://api.github.com/repos/huggingface/datasets/issues/4088
state: closed | comments: 1 | user_login: albertvillanova | labels: []
created_at: 2022-04-04T14:43:51 | updated_at: 2022-04-05T15:23:27 | closed_at: 2022-04-05T15:17:41
body:
This PR removes unused legacy custom `WriteToParquet`, once official Apache Beam includes the patch since version 2.22.0: - Patch PR: https://github.com/apache/beam/pull/11699 - Issue: https://issues.apache.org/jira/browse/BEAM-10022 In relation with: - #204
is_pull_request: true
PR #4087 | Fix BeamWriter output Parquet file
id: 1,191,819,805 | url: https://github.com/huggingface/datasets/pull/4087 | API: https://api.github.com/repos/huggingface/datasets/issues/4087
state: closed | comments: 1 | user_login: albertvillanova | labels: []
created_at: 2022-04-04T13:46:50 | updated_at: 2022-04-05T15:00:40 | closed_at: 2022-04-05T14:54:48
body:
Since now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files. This PR: - writes Parquet file preserving original schema and without serialization, thus avoiding serialization overhead and resulting in a smaller output file size. - fixes `parquet_to_arrow` function
is_pull_request: true
Issue #4086 | Dataset viewer issue for McGill-NLP/feedbackQA
id: 1,191,373,374 | url: https://github.com/huggingface/datasets/issues/4086 | API: https://api.github.com/repos/huggingface/datasets/issues/4086
state: closed | comments: 2 | user_login: cslizc | labels: ["dataset-viewer"]
created_at: 2022-04-04T07:27:20 | updated_at: 2022-04-04T22:29:53 | closed_at: 2022-04-04T08:01:45
body:
## Dataset viewer issue for '*McGill-NLP/feedbackQA*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)* *short description of the issue* The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message: ``` Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist. ``` Am I the one who added this dataset ? Yes
is_pull_request: false
Issue #4085 | datasets.set_progress_bar_enabled(False) not working in datasets v2
id: 1,190,621,345 | url: https://github.com/huggingface/datasets/issues/4085 | API: https://api.github.com/repos/huggingface/datasets/issues/4085
state: closed | comments: 3 | user_login: virilo | labels: ["bug"]
created_at: 2022-04-02T12:40:10 | updated_at: 2022-09-17T02:18:03 | closed_at: 2022-04-04T06:44:34
body:
## Describe the bug datasets.set_progress_bar_enabled(False) not working in datasets v2 ## Steps to reproduce the bug ```python datasets.set_progress_bar_enabled(False) ``` ## Expected results datasets not using any progress bar ## Actual results AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled ## Environment info datasets version 2
is_pull_request: false
Issue #4084 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co
id: 1,190,060,415 | url: https://github.com/huggingface/datasets/issues/4084 | API: https://api.github.com/repos/huggingface/datasets/issues/4084
state: closed | comments: 1 | user_login: blackhat-coder | labels: ["bug"]
created_at: 2022-04-01T17:02:47 | updated_at: 2022-04-04T07:24:37 | closed_at: 2022-04-04T07:21:31
body:
## Describe the bug Hi ### Error 1 Running the Tensforlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors' ### Error 2 `DataCollatorWithPadding` isn't imported ## Steps to reproduce the bug ```python import tensorflow as tf from datasets import load_dataset from transformers import AutoTokenizer dataset = load_dataset('glue', 'mrpc', split='train') tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'], shuffle=True, batch_size=16, collate_fn=data_collator, ) ``` This is the same code on Huggingface.co ## Actual results TypeError: __init__() got an unexpected keyword argument 'return_tensors' ## Environment info - `datasets` version: 2.0.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.7 - PyArrow version: 6.0.0 - Pandas version: 1.4.1 >
is_pull_request: false
PR #4083 | Add SacreBLEU Metric Card
id: 1,190,025,878 | url: https://github.com/huggingface/datasets/pull/4083 | API: https://api.github.com/repos/huggingface/datasets/issues/4083
state: closed | comments: 1 | user_login: emibaylor | labels: []
created_at: 2022-04-01T16:24:56 | updated_at: 2022-04-12T20:45:00 | closed_at: 2022-04-12T20:38:40
body:
null
is_pull_request: true
PR #4082 | Add chrF(++) Metric Card
id: 1,189,965,845 | url: https://github.com/huggingface/datasets/pull/4082 | API: https://api.github.com/repos/huggingface/datasets/issues/4082
state: closed | comments: 1 | user_login: emibaylor | labels: []
created_at: 2022-04-01T15:32:12 | updated_at: 2022-04-12T20:43:55 | closed_at: 2022-04-12T20:38:06
body:
null
is_pull_request: true
PR #4081 | Close parquet writer properly in `push_to_hub`
id: 1,189,916,472 | url: https://github.com/huggingface/datasets/pull/4081 | API: https://api.github.com/repos/huggingface/datasets/issues/4081
state: closed | comments: 2 | user_login: lhoestq | labels: []
created_at: 2022-04-01T14:58:50 | updated_at: 2022-07-14T19:22:06 | closed_at: 2022-04-01T16:16:19
body:
We don’t call writer.close(), which causes https://github.com/huggingface/datasets/issues/4077. It can happen that we upload the file before the writer is garbage collected and writes the footer. I fixed this by explicitly closing the parquet writer. Close https://github.com/huggingface/datasets/issues/4077.
is_pull_request: true
Issue #4080 | NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset
id: 1,189,667,296 | url: https://github.com/huggingface/datasets/issues/4080 | API: https://api.github.com/repos/huggingface/datasets/issues/4080
state: closed | comments: 1 | user_login: richarddwang | labels: ["duplicate", "dataset bug"]
created_at: 2022-04-01T11:34:28 | updated_at: 2022-04-01T13:59:10 | closed_at: 2022-04-01T13:59:10
body:
## Steps to reproduce the bug ```python datasets.load_dataset("conll2012_ontonotesv5", "english_v12") ``` ## Actual results ``` Downloading builder script: 32.2kB [00:00, 9.72MB/s] Downloading metadata: 20.0kB [00:00, 10.4MB/s] Downloading and preparing dataset conll2012_ontonotesv5/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size , total: 379.12 MiB) to ... Traceback (most recent call last): [315/390] File "/home/yisiang/lgtn/conll2012/run.py", line 86, in <module> train() File "/home/yisiang/lgtn/conll2012/run.py", line 65, in train trainer.fit(model, datamodule=dm) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit self._call_and_handle_interrupt( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_inte rrupt return trainer_fn(*args, **kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1131, in _run self._data_connector.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 154, in pre pare_data self.trainer.datamodule.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn fn(*args, **kwargs) File "/home/yisiang/lgtn/_abstract_task/data.py", line 43, in prepare_data raw_dsets = datasets.load_dataset(**load_dataset_kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/load.py", line 1687, in load_dataset builder_instance.download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 605, in download_and_prepare self._download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 1104, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 676, in _download_and_prepare verify_checksums( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0
is_pull_request: false
PR #4079 | Increase max retries for GitHub datasets
id: 1,189,521,576 | url: https://github.com/huggingface/datasets/pull/4079 | API: https://api.github.com/repos/huggingface/datasets/issues/4079
state: closed | comments: 1 | user_login: albertvillanova | labels: []
created_at: 2022-04-01T09:34:03 | updated_at: 2022-04-01T15:32:40 | closed_at: 2022-04-01T15:27:11
body:
As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics: - #4063 Note that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub: - #4059 Fix #2048 Related to: - #4051 - #3210 - #2787 - #2075 - #2036 CC: @lhoestq
is_pull_request: true
PR #4078 | Fix GithubMetricModuleFactory instantiation with None download_config
id: 1,189,513,572 | url: https://github.com/huggingface/datasets/pull/4078 | API: https://api.github.com/repos/huggingface/datasets/issues/4078
state: closed | comments: 1 | user_login: albertvillanova | labels: []
created_at: 2022-04-01T09:26:58 | updated_at: 2022-04-01T14:44:51 | closed_at: 2022-04-01T14:39:27
body:
Recent PR: - #4063 introduced a potential bug if `GithubMetricModuleFactory` is instantiated with None `download_config`. This PR add instantiation tests and fix that potential issue. CC: @lhoestq
is_pull_request: true
Issue #4077 | ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
id: 1,189,467,585 | url: https://github.com/huggingface/datasets/issues/4077 | API: https://api.github.com/repos/huggingface/datasets/issues/4077
state: closed | comments: 0 | user_login: NielsRogge | labels: ["bug"]
created_at: 2022-04-01T08:49:13 | updated_at: 2022-04-01T16:16:19 | closed_at: 2022-04-01T16:16:19
body:
## Describe the bug When uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine. Basically, I do: ``` from datasets import load_dataset dataset = load_dataset("imagefolder", data_files="path_to_my_files") dataset.push_to_hub("dataset_name") # works fine, no errors reloaded_dataset = load_dataset("dataset_name") ``` and it returns: ``` /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` I created a Colab notebook to reproduce my error: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing
is_pull_request: false
PR #4076 | Add ROUGE Metric Card
id: 1,188,478,867 | url: https://github.com/huggingface/datasets/pull/4076 | API: https://api.github.com/repos/huggingface/datasets/issues/4076
state: closed | comments: 1 | user_login: emibaylor | labels: []
created_at: 2022-03-31T18:34:34 | updated_at: 2022-04-12T20:43:45 | closed_at: 2022-04-12T20:37:38
body:
Add ROUGE metric card. I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original rouge paper does not seem to present specific values, just correlations with human judgements). Any suggestions on which paper(s) to pull from would be helpful! :)
is_pull_request: true
Issue #4075 | Add CCAgT dataset
id: 1,188,462,162 | url: https://github.com/huggingface/datasets/issues/4075 | API: https://api.github.com/repos/huggingface/datasets/issues/4075
state: closed | comments: 4 | user_login: johnnv1 | labels: ["dataset request", "vision"]
created_at: 2022-03-31T18:20:28 | updated_at: 2022-07-06T19:03:42 | closed_at: 2022-07-06T19:03:42
body:
## Adding a Dataset - **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique - **Description:** The dataset contains 2540 images (1600x1200 where each pixel is 0.111μm×0.111μm) from three different slides, having at least one nucleus per image. These images are from fields belonging to a sample cervical slide, colored with silver-stained, a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR). - **Paper:** https://doi.org/10.1109/cbms49503.2020.00110 - **Data:** https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ or https://drive.google.com/drive/u/4/folders/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0 - **Motivation:** This is a unique dataset (because of the stain), for a major health problem, cervical cancer, with real data. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Hi, this is a public version of the dataset that I have been working on, soon we will have another version of this dataset. But until this new version goes out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible
is_pull_request: false
Issue #4074 | Error in google/xtreme_s dataset card
id: 1,188,449,142 | url: https://github.com/huggingface/datasets/issues/4074 | API: https://api.github.com/repos/huggingface/datasets/issues/4074
state: closed | comments: 1 | user_login: wranai | labels: ["documentation", "dataset bug"]
created_at: 2022-03-31T18:07:45 | updated_at: 2022-04-01T08:12:56 | closed_at: 2022-04-01T08:12:56
body:
**Link:** https://huggingface.co/datasets/google/xtreme_s Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
is_pull_request: false
PR #4073 | Create a metric card for Competition MATH
id: 1,188,364,711 | url: https://github.com/huggingface/datasets/pull/4073 | API: https://api.github.com/repos/huggingface/datasets/issues/4073
state: closed | comments: 1 | user_login: sashavor | labels: []
created_at: 2022-03-31T16:48:59 | updated_at: 2022-04-01T19:02:39 | closed_at: 2022-04-01T18:57:13
body:
Proposing metric card for Competition MATH
is_pull_request: true
PR #4072 | Add installation instructions to image_process doc
id: 1,188,266,410 | url: https://github.com/huggingface/datasets/pull/4072 | API: https://api.github.com/repos/huggingface/datasets/issues/4072
state: closed | comments: 1 | user_login: mariosasko | labels: []
created_at: 2022-03-31T15:29:37 | updated_at: 2022-03-31T17:05:46 | closed_at: 2022-03-31T17:00:19
body:
This PR adds the installation instructions for the Image feature to the image process doc.
is_pull_request: true
Issue #4071 | Loading issue for xuyeliu/notebookCDG dataset
id: 1,187,587,683 | url: https://github.com/huggingface/datasets/issues/4071 | API: https://api.github.com/repos/huggingface/datasets/issues/4071
state: closed | comments: 1 | user_login: Jun-jie-Huang | labels: ["dataset bug"]
created_at: 2022-03-31T06:36:29 | updated_at: 2022-03-31T08:17:01 | closed_at: 2022-03-31T08:16:16
body:
## Dataset viewer issue for '*xuyeliu/notebookCDG*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)* *Couldn't load the xuyeliu/notebookCDG with provided scripts: * ``` from datasets import load_dataset dataset = load_dataset("xuyeliu/notebookCDG/dataset_notebook.pkl") ``` I get an error message as follows: FileNotFoundError: Couldn't find a dataset script at /home/code_documentation/code/xuyeliu/notebookCDG/notebookCDG.py or any data file in the same directory. Couldn't find 'xuyeliu/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] Am I the one who added this dataset ? No
is_pull_request: false
PR #4070 | Create metric card for seqeval
id: 1,186,810,205 | url: https://github.com/huggingface/datasets/pull/4070 | API: https://api.github.com/repos/huggingface/datasets/issues/4070
state: closed | comments: 1 | user_login: sashavor | labels: []
created_at: 2022-03-30T18:08:01 | updated_at: 2022-04-01T19:02:58 | closed_at: 2022-04-01T18:57:25
body:
Proposing metric card for seqeval. Not sure which values to report for Popular papers though.
is_pull_request: true
PR #4069 | Add support for metadata files to `imagefolder`
id: 1,186,790,578 | url: https://github.com/huggingface/datasets/pull/4069 | API: https://api.github.com/repos/huggingface/datasets/issues/4069
state: closed | comments: 7 | user_login: mariosasko | labels: []
created_at: 2022-03-30T17:47:51 | updated_at: 2022-05-03T12:49:00 | closed_at: 2022-05-03T12:42:16
body:
This PR adds support for metadata files to `imagefolder` to add an ability to specify image fields other than `image` and `label`, which are inferred from the directory structure in the loaded dataset. To be parsed as an image metadata file, a file should be named `"info.csv"` and should have the following structure: ``` image_id,some_col1_name,some_col2_name rel/path/to/image1.jpg,image1_col1_value,image1_col2_value rel/path/to/image2.jpg,image2_col1_value,image2_col2_value ... ``` This is how the resolution works: ``` - path/to/imagefolder/directory - info.csv - 10.jpg # referenced as 10.jpg in "info.csv" - Cat - 0.jpg # referenced as Cat/0.jpg in "info.csv" - 1.jpg # referenced as Cat/1.jpg in "info.csv" - Dog - 0.jpg # referenced as Dog/0.jpg in "info.csv" - 1.jpg # referenced as Dog/1.jpg in "info.csv" ``` Open questions: 1. IMO it makes more sense to store image metadata as JSON Lines than CSV. CSV is sufficient for textual metadata but not the best for representing bounding boxes, for instance. Also, JSON Lines is more strict, which is good in this case (CSV supports various delimiters, the header line is optional, etc., so it's easier to enforce rules on JSON Lines that it's on CSV) 2. A better name for the `image_id` column, which contains image identifiers? Maybe `image_file` or `image_filename`? 3. WDYT about making `with_metadata=True` the default behavior if the loaded repo/directory contains an `info.csv` file? An example repository: https://huggingface.co/datasets/mariosasko/PetImages. Can be loaded by installing `datasets` from the PR branch and running `load_dataset("mariosasko/PetImages", with_metadata=True)`. cc: @abhishekkrthakur (this PR should address https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF) TODOs: - [x] Test - [x] Metadata file nesting ``` - path/to/imagefolder/directory - info.csv - 10.jpg - Cat - info.csv # should have higher precedence in this directory than the top-level info.csv, but we choose the first "eligible" metadata file currently - 0.jpg - 1.jpg ```
is_pull_request: true
PR #4068 | Improve out of bounds error message
id: 1,186,765,422 | url: https://github.com/huggingface/datasets/pull/4068 | API: https://api.github.com/repos/huggingface/datasets/issues/4068
state: closed | comments: 1 | user_login: lhoestq | labels: []
created_at: 2022-03-30T17:22:10 | updated_at: 2022-03-31T08:39:08 | closed_at: 2022-03-31T08:33:57
body:
In 1.18.4 with https://github.com/huggingface/datasets/pull/3719 we introduced an error message for users using `select` with out of bounds indices. The message ended up being confusing for some users because it mentioned negative indices, which is not the main use case. I replaced it with a message that is very similar to the one you get with you try to access a list with an out-of-range index.
is_pull_request: true
PR #4067 | Update datasets task tags to align tags with models
id: 1,186,731,905 | url: https://github.com/huggingface/datasets/pull/4067 | API: https://api.github.com/repos/huggingface/datasets/issues/4067
state: closed | comments: 3 | user_login: lhoestq | labels: []
created_at: 2022-03-30T16:49:32 | updated_at: 2022-04-13T17:37:27 | closed_at: 2022-04-13T17:31:11
body:
**Requires https://github.com/huggingface/datasets/pull/4066 to be merged first** Following https://github.com/huggingface/datasets/pull/4066 we need to update many dataset tags to use the new ones. This PR takes case of this and is quite big - feel free to review only certain tags if you don't want to spend too much time on it. Note that the CI will never be green for this PR, because many dataset cards have missing tags or sections, and fixing them is out of scope of this PR (the CI on master will be green anyway)
is_pull_request: true
PR #4066 | Tasks alignment with models
id: 1,186,728,104 | url: https://github.com/huggingface/datasets/pull/4066 | API: https://api.github.com/repos/huggingface/datasets/issues/4066
state: closed | comments: 8 | user_login: lhoestq | labels: []
created_at: 2022-03-30T16:45:56 | updated_at: 2022-04-13T13:12:52 | closed_at: 2022-04-08T12:20:00
body:
I updated our `tasks.json` file with the new task taxonomy that is aligned with models. The rule that defines a task is the following: **Two tasks are different if and only if the steps of their pipelines** are different, i.e. if they can’t reasonably be implemented using the same coherent code (level of granularity/complexity of the code to be defined - ideally I’d like to say “HF user’s level”) - this is the same definition in `transformers` I will update the tags of all the datasets in this repository [in another PR](https://github.com/huggingface/datasets/pull/4067) for readability. Main changes: - conditional-text-generation is split between summarization, translation, text-generation and text2text-generation - speech-processing is split into automatic-speech-recognition, audio-classification, etc. - structure-prediction is renamed token-classification - abstractive-qa now belongs to text2text-generation Here is just a simplified YAML dump of `tasks.json`: ```yaml audio-classification: - keyword-spotting - speaker-identification - speaker-intent-classification - emotion-recognition - speaker-language-identification audio-to-audio: [] automatic-speech-recognition: [] conversational: - dialogue-generation feature-extraction: [] fill-mask: - slot-filling - masked-language-modeling image-classification: - multi-label-image-classification - multi-class-image-classification image-segmentation: - instance-segmentation - semantic-segmentation - panoptic-segmentation image-to-text: - image-captioning multiple-choice: - multiple-choice-qa - multiple-choice-coreference-resolution object-detection: - face-detection - vehicle-detection question-answering: - extractive-qa - open-domain-qa - closed-domain-qa sentence-similarity: [] tabular-classification: [] tabular-to-text: - rdf-to-text summarization: - news-articles-summarization - news-articles-headline-generation table-to-text: [] table-question-answering: [] text-classification: - acceptability-classification - entity-linking-classification - fact-checking - intent-classification - multi-class-classification - multi-label-classification - natural-language-inference - semantic-similarity-classification - sentiment-classification - topic-classification - semantic-similarity-scoring - sentiment-scoring - sentiment-analysis - hate-speech-detection - text-scoring text-generation: - dialogue-modeling - language-modeling text-retrieval: - document-retrieval - utterance-retrieval - entity-linking-retrieval - fact-checking-retrieval text-to-image: [] text-to-tabular: - relation-extraction - semantic-role-labeling text-to-speech: [] text2text-generation: - text-simplification - explanation-generation - abstractive-qa - open-domain-abstractive-qa - closed-domain-qa - open-book-qa - closed-book-qa time-series-forecasting: - univariate-time-series-forecasting - multivariate-time-series-forecasting token-classification: - named-entity-recognition - part-of-speech-tagging - parsing - lemmatization - word-sense-disambiguation - coreference-resolution translation: [] visual-question-answering: [] voice-activity-detection: [] zero-shot-classification: [] zero-shot-image-classification: [] reinforcement-learning: [] other: [] ``` Feel free to comment and give suggestions, especially if you think we can also align this list with other projects cc @julien-c @osanseviero @severo @lewtun @yjernite @albertvillanova @mariosasko @polinaeterna
is_pull_request: true
PR #4065 | Create metric card for METEOR
id: 1,186,722,478 | url: https://github.com/huggingface/datasets/pull/4065 | API: https://api.github.com/repos/huggingface/datasets/issues/4065
state: closed | comments: 1 | user_login: sashavor | labels: []
created_at: 2022-03-30T16:40:30 | updated_at: 2022-03-31T17:12:10 | closed_at: 2022-03-31T17:07:50
body:
Proposing a metric card for METEOR
is_pull_request: true
PR #4064 | Contributing MedMCQA dataset
id: 1,186,650,321 | url: https://github.com/huggingface/datasets/pull/4064 | API: https://api.github.com/repos/huggingface/datasets/issues/4064
state: closed | comments: 15 | user_login: monk1337 | labels: []
created_at: 2022-03-30T15:42:47 | updated_at: 2022-05-06T09:40:40 | closed_at: 2022-05-06T08:42:56
body:
Adding MedMCQA dataset ( https://paperswithcode.com/dataset/medmcqa ) **Name**: MedMCQA **Description**: MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. The dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM), Obstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics Pathology, Pediatrics, Pharmacology, Physiology, Psychiatry, Radiology Skin, Preventive & Social Medicine (PSM), and Surgery **Code**: https://github.com/medmcqa/medmcqa All files are at place : **a dataset script** : medmcqa.py **a dataset card with tags and information** : README.md. **a metadata file** : dataset_infos.json **a dummy-data file** : Please help to generate this file, I was facing ` raise JSONDecodeError("Extra data", s, end)` error
is_pull_request: true
PR #4063 | Increase max retries for GitHub metrics
id: 1,186,611,368 | url: https://github.com/huggingface/datasets/pull/4063 | API: https://api.github.com/repos/huggingface/datasets/issues/4063
state: closed | comments: 1 | user_login: albertvillanova | labels: []
created_at: 2022-03-30T15:12:48 | updated_at: 2022-03-31T14:42:52 | closed_at: 2022-03-31T14:37:47
body:
As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub metrics. Related to: - #3134 Also related to: - #4059
is_pull_request: true
Issue #4062 | Loading mozilla-foundation/common_voice_7_0 dataset failed
id: 1,186,330,732 | url: https://github.com/huggingface/datasets/issues/4062 | API: https://api.github.com/repos/huggingface/datasets/issues/4062
state: closed | comments: 10 | user_login: aapot | labels: ["dataset bug"]
created_at: 2022-03-30T11:39:41 | updated_at: 2024-06-09T12:12:46 | closed_at: 2022-03-31T08:18:04
body:
## Describe the bug I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too than just the `fi` and `test` split. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token="YOUR TOKEN") ``` ## Expected results load `mozilla-foundation/common_voice_7_0` dataset succesfully ## Actual results ``` JSONDecodeError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs) 909 try: --> 910 return complexjson.loads(self.text, **kwargs) 911 except JSONDecodeError as e: /opt/conda/lib/python3.7/site-packages/simplejson/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, **kw) 524 and not use_decimal and not kw): --> 525 return _default_decoder.decode(s) 526 if cls is None: /opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in decode(self, s, _w, _PY3) 369 s = str(s, self.encoding) --> 370 obj, end = self.raw_decode(s) 371 end = _w(s, end).end() /opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in raw_decode(self, s, idx, _w, _PY3) 399 idx += 3 --> 400 return self.scan_once(s, idx=_w(s, idx).end()) JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: JSONDecodeError Traceback (most recent call last) /tmp/ipykernel_358/370980805.py in <module> 1 # load Common Voice 7.0 dataset from Huggingface with Finnish "test" split ----> 2 test_dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1690 ignore_verifications=ignore_verifications, 1691 try_from_hf_gcs=try_from_hf_gcs, -> 1692 use_auth_token=use_auth_token, 1693 ) 1694 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 604 if not downloaded_from_gcs: 605 self._download_and_prepare( --> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 607 ) 608 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) 1102 1103 def _download_and_prepare(self, dl_manager, verify_infos): -> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) 1105 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 670 split_dict = SplitDict(dataset_name=self.name) 671 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 672 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 673 674 # Checksums verification 
~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _split_generators(self, dl_manager) 151 152 self._log_download(self.config.name, bundle_version, hf_auth_token) --> 153 archive = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template)) 154 155 if self.config.version < datasets.Version("5.0.0"): ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _get_bundle_url(self, locale, url_template) 130 path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'") 131 use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024 --> 132 response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json() 133 return response["url"] 134 /opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs) 915 raise RequestsJSONDecodeError(e.message) 916 else: --> 917 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) 918 919 @property JSONDecodeError: [Errno Expecting value] Not Found: 0 ``` ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 5.0.0 - Pandas version: 1.3.5
is_pull_request: false
Issue #4061 | Loading cnn_dailymail dataset failed
id: 1,186,317,071 | url: https://github.com/huggingface/datasets/issues/4061 | API: https://api.github.com/repos/huggingface/datasets/issues/4061
state: closed | comments: 1 | user_login: Arij-Aladel | labels: ["bug", "duplicate"]
created_at: 2022-03-30T11:29:02 | updated_at: 2022-03-30T13:36:14 | closed_at: 2022-03-30T13:36:14
body:
## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on jupyter lab, but I am getting an error ` NotADirectoryError:[Errno20] Not a directory ` while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` ## Expected results load `cnn_dailymail` dataset succesfully ## Actual results failed to load and get error > NotADirectoryError: [Errno 20] Not a directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` 1.8.0: - Platform: Ubuntu-20.04 - Python version: 3.9.10 - PyArrow version: 3.0.0
is_pull_request: false
PR #4060 | Deprecate canonical Multilingual Librispeech
id: 1,186,281,033 | url: https://github.com/huggingface/datasets/pull/4060 | API: https://api.github.com/repos/huggingface/datasets/issues/4060
state: closed | comments: 7 | user_login: polinaeterna | labels: []
created_at: 2022-03-30T10:56:56 | updated_at: 2022-04-01T12:54:05 | closed_at: 2022-04-01T12:48:51
body:
Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co/datasets/facebook/multilingual_librispeech) which supports streaming. However, there is a problem regarding new ASR template schema: since it's changed, I guess all community datasets that use this template do not work with new version of the library, including MLS. Should we somehow notify users about that or is it possible to change this line ourselves? For MLS specifically, I cannot change the code directly as I'm not the member of the Facebook org. Hm, and the code should be change after the release, no?
is_pull_request: true
PR #4059 | Load GitHub datasets from Hub
id: 1,186,149,949 | url: https://github.com/huggingface/datasets/pull/4059 | API: https://api.github.com/repos/huggingface/datasets/issues/4059
state: closed | comments: 10 | user_login: albertvillanova | labels: []
created_at: 2022-03-30T09:21:56 | updated_at: 2022-09-16T12:43:26 | closed_at: 2022-09-16T12:40:43
body:
We have recurrently had connection errors when requesting GitHub because sometimes the site is not available. This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub. Fix #2048 Related to: - #4051 - #3210 - #2787 - #2075 - #2036
is_pull_request: true
PR #4058 | Updated annotations for nli_tr dataset
id: 1,185,611,600 | url: https://github.com/huggingface/datasets/pull/4058 | API: https://api.github.com/repos/huggingface/datasets/issues/4058
state: closed | comments: 2 | user_login: e-budur | labels: []
created_at: 2022-03-29T23:46:59 | updated_at: 2022-04-12T20:55:12 | closed_at: 2022-04-12T10:37:22
body:
This PR adds annotation tags for `nli_tr` dataset so that the dataset can be searchable wrt. relevant query parameters. The annotations in this PR are based on the existing annotations of `snli` and `multi_nli` datasets as `nli_tr` is a machine-generated extension of those datasets. This PR is intended only for updating the annotation labels but a followup PR will focus on updating the missing sections in the `README.md` as well. Thanks for all your time to review it.
is_pull_request: true
Issue #4057 | `load_dataset` consumes too much memory for audio + tar archives
id: 1,185,442,001 | url: https://github.com/huggingface/datasets/issues/4057 | API: https://api.github.com/repos/huggingface/datasets/issues/4057
state: closed | comments: 18 | user_login: JFCeron | labels: ["bug"]
created_at: 2022-03-29T21:38:55 | updated_at: 2022-08-16T10:22:55 | closed_at: 2022-08-16T10:22:55
body:
## Description `load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists. ## Steps to reproduce the bug Here's my implementation of `_generate_examples`: ```python class MyDatasetBuilder(datasets.GeneratorBasedBuilder): DEFAULT_WRITER_BATCH_SIZE = 1 ... def _split_generators(self, dl_manager): archive_path = dl_manager.download(_DL_URLS[self.config.name]) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "audio_tarfile_path": archive_path["audio_tarfile"] }, ), ] def _generate_examples(self, audio_tarfile_path): key = 0 with tarfile.open(audio_tarfile_path, mode="r|") as audio_tarfile: for audio_tarinfo in audio_tarfile: audio_name = audio_tarinfo.name audio_file_obj = audio_tarfile.extractfile(audio_tarinfo) yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}} key += 1 ``` I then try to load via `ds = load_dataset('./datasets/my_new_dataset', writer_batch_size=1)`, and memory usage grows until all 8GB of my machine are taken and process is killed (`Killed`). Also tried an untarred version of this using `os.walk` but the same happened. I created a script to confirm that one can safely go through such a generator, which runs just fine with memory <500MB at all times. ```python import tarfile def generate_examples(): audio_tarfile = tarfile.open("audios.tar", mode="r|") key = 0 for audio_tarinfo in audio_tarfile: audio_name = audio_tarinfo.name audio_file_obj = audio_tarfile.extractfile(audio_tarinfo) yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}} key += 1 if __name__ == "__main__": examples = generate_examples() for example in examples: pass ``` ## Expected results Memory consumption should be similar to the non-huggingface script. ## Actual results Process is killed after consuming too much memory. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
is_pull_request: false
Issue #4056 | Unexpected behavior of _TempDirWithCustomCleanup
id: 1,185,155,775 | url: https://github.com/huggingface/datasets/issues/4056 | API: https://api.github.com/repos/huggingface/datasets/issues/4056
state: open | comments: 2 | user_login: JonasGeiping | labels: ["bug"]
created_at: 2022-03-29T16:58:22 | updated_at: 2022-03-30T15:08:04 | closed_at: null
body:
## Describe the bug This is not 100% a bug in `datasets`, but behavior that surprised me and I think this could be made more robust on the `datasets`side. When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I want to set TMPDIR at runtime using os.ENVIRON["TMPDIR"] = something, but depending on other imported modules this can fail to take effect. ## Steps to reproduce the bug `_TempDirWithCustomCleanup` relies on `tempfile` to generate a path to a temporary directory. However, `tempfile` generates the path only once. This can be a problem when trying to set TMPDIR at runtime whenever other code imports `tempfile` first and does something unexpected. For example (after too much trial and error) I found out that a different part of the code base I work with defines a class `PatchedDataCollatorForLanguageModeling(transformers.DataCollatorForLanguageModeling)` based on a `transformers` class. This import is enough to trigger `tempfile` to generate `tempfile` to generate a temporary path and leading to the wrong path being cached in `tempfile.tempdir`. ## Suggestion: I could file this also as bug with `transformers`, but I think fixing this on the datasets would be much more robust: Datasets could recompute the temporary path once (technically possible via `tempfile._get_default_tempdir` or resetting the global variable `tempfile.tmpdir` to None) before setting its own global `_TEMP_DIR_FOR_TEMP_CACHE_FILES`.
false
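A minimal sketch of the reset suggested in the issue above, assuming `datasets` (or the user) clears `tempfile`'s cached default directory before the temporary cache dir is created (the helper name is hypothetical):

```python
import os
import tempfile

def refresh_default_tempdir(new_tmpdir: str) -> str:
    """Make a TMPDIR value set at runtime take effect, even if another
    module already imported tempfile and cached a default directory."""
    os.environ["TMPDIR"] = new_tmpdir
    # tempfile caches the default directory in tempfile.tempdir the first
    # time it is computed; resetting it to None forces recomputation.
    tempfile.tempdir = None
    return tempfile.gettempdir()  # now respects the updated TMPDIR

# Usage: call this before datasets creates _TEMP_DIR_FOR_TEMP_CACHE_FILES
print(refresh_default_tempdir("/mnt/scratch/tmp"))
```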
1,184,976,292
https://api.github.com/repos/huggingface/datasets/issues/4055
https://github.com/huggingface/datasets/pull/4055
4,055
[DO NOT MERGE] Test doc-builder
closed
2
2022-03-29T14:39:02
2022-03-30T12:31:14
2022-03-30T12:25:52
lewtun
[]
This is a test PR to ensure the changes in https://github.com/huggingface/doc-builder/pull/164 don't break anything in `datasets`
true
1,184,575,368
https://api.github.com/repos/huggingface/datasets/issues/4054
https://github.com/huggingface/datasets/pull/4054
4,054
Support float data types in pearsonr/spearmanr metrics
closed
1
2022-03-29T09:29:10
2022-03-29T14:07:59
2022-03-29T14:02:20
albertvillanova
[]
Fix #4053.
true
1,184,500,378
https://api.github.com/repos/huggingface/datasets/issues/4053
https://github.com/huggingface/datasets/issues/4053
4,053
Modify datatype from `int32` to `float` for pearsonr, spearmanr.
closed
1
2022-03-29T08:27:41
2022-03-29T14:02:20
2022-03-29T14:02:20
woodywarhol9
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** - Currently, [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both receive input data as 'int32'. **Describe the solution you'd like** - Considering that these metrics are widely used for the STS task (whose labels are in 'float' data type), it would be better to change the datatype from 'int32' to 'float' to get exact similarity values (a sketch of the feature change is shown after this record).
false
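A minimal sketch of what the requested datatype change could look like in the metrics' feature specification (illustrative only, not necessarily the patch merged in the linked PR):

```python
import datasets

# Declaring predictions/references as floats instead of int32 lets
# STS-style similarity labels (e.g. 3.75) pass through unmodified.
features = datasets.Features(
    {
        "predictions": datasets.Value("float32"),
        "references": datasets.Value("float32"),
    }
)
```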
1,184,447,977
https://api.github.com/repos/huggingface/datasets/issues/4052
https://github.com/huggingface/datasets/issues/4052
4,052
metric = metric_cls( TypeError: 'NoneType' object is not callable
closed
1
2022-03-29T07:43:08
2022-03-29T14:06:01
2022-03-29T14:06:01
klyuhang9
[]
Hi, friend. I've run into a problem. When I run the code: `metric = load_metric('glue', 'rte')` the following error is raised: `metric = metric_cls( TypeError: 'NoneType' object is not callable ` I don't know why. Thanks for your help!
false
1,184,400,179
https://api.github.com/repos/huggingface/datasets/issues/4051
https://github.com/huggingface/datasets/issues/4051
4,051
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
closed
5
2022-03-29T07:00:31
2022-05-08T07:27:32
2022-03-29T08:29:25
klyuhang9
[]
Hi, I've run into a problem. When I run the code: `dataset = load_dataset('glue','sst2')` the following error is raised: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py I don't know why; the URL opens fine when I view it in Google Chrome. Thanks for your help!
false
1,184,346,501
https://api.github.com/repos/huggingface/datasets/issues/4050
https://github.com/huggingface/datasets/pull/4050
4,050
Add RVL-CDIP dataset
closed
14
2022-03-29T06:00:02
2022-04-22T09:55:07
2022-04-21T17:15:41
dnaveenr
[]
Resolves #2762 Dataset Request : Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762) This PR adds the RVL-CDIP dataset. The dataset is distributed via a Google Drive link and wasn't getting downloaded automatically, so I have provided manual_download_instructions. - I have added the dummy_data.zip as well. I need input on how to run the real-data and dummy-data tests for datasets that require a manual download. Inputs and suggestions for improvement are welcome. Thank you.
true
1,183,832,893
https://api.github.com/repos/huggingface/datasets/issues/4049
https://github.com/huggingface/datasets/pull/4049
4,049
Create metric card for the Code Eval metric
closed
3
2022-03-28T18:34:23
2022-03-29T13:38:12
2022-03-29T13:32:50
sashavor
[]
Creating initial Code Eval metric card
true
1,183,804,576
https://api.github.com/repos/huggingface/datasets/issues/4048
https://github.com/huggingface/datasets/issues/4048
4,048
Split size error on `amazon_us_reviews` / `PC_v1_00` dataset
closed
3
2022-03-28T18:12:04
2022-04-08T12:29:30
2022-04-08T12:29:30
trentonstrong
[ "bug", "good first issue" ]
## Describe the bug When downloading this subset as of 3-28-2022, you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6M rows while the split expects <1M. Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6M number that we see, and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata. Happy to submit a PR and fix this up if it turns out to be a metadata issue, but wanted to get some other :eyes: on it first. ## Steps to reproduce the bug ```python load_dataset('amazon_us_reviews', 'PC_v1_00') ``` ## Expected results Dataset is downloaded and extracted successfully. ## Actual results A split size exception is thrown (a possible workaround is sketched after this record). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
false
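Until the split metadata for `PC_v1_00` is corrected, a possible workaround for the issue above is to skip verification when loading (note this also skips checksum checks; the `ignore_verifications` flag is assumed to be available as in `datasets` 2.x):

```python
from datasets import load_dataset

# Skips the split-size/checksum verification that raises the error.
ds = load_dataset("amazon_us_reviews", "PC_v1_00", ignore_verifications=True)
print(ds["train"].num_rows)  # should show the ~6M rows actually present
```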
1,183,789,237
https://api.github.com/repos/huggingface/datasets/issues/4047
https://github.com/huggingface/datasets/issues/4047
4,047
Dataset.unique(column: str) -> ArrowNotImplementedError
closed
3
2022-03-28T17:59:32
2022-04-01T18:24:57
2022-04-01T18:24:57
orkenstein
[ "bug" ]
## Describe the bug I'm trying to use `unique()` function, but it fails ## Steps to reproduce the bug 1. Get dataset 2. Call `unique` 3. Error # Sample code to reproduce the bug ```python !pip show datasets from datasets import load_dataset dataset = load_dataset('wikiann', 'en') dataset['train'].column_names dataset['train'].unique(dataset['train'].column_names[0]) ``` ## Expected results It would be nice to actually see unique items ## Actual results Error: ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) [<ipython-input-10-5e0de07ed42c>](https://s0qyv2vjaji-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220324-060046-RC00_436956229#) in <module>() 6 7 dataset['train'].column_names ----> 8 dataset['train'].unique(dataset['train'].column_names[0]) 5 frames /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: Function unique has no kernel matching input types (array[list<item: string>]) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Google Collab - Python version: 3.7.13 - PyArrow version: 6.0.1
false
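For the issue above, the failure comes from calling `unique()` on wikiann's first column, which is a list-of-strings feature that Arrow's `unique` kernel does not handle. A possible workaround, assuming the goal is the set of distinct items inside that list-valued column, is to flatten it in Python:

```python
from itertools import chain
from datasets import load_dataset

dataset = load_dataset("wikiann", "en", split="train")

# unique() works on scalar columns; the first column here is a
# list<string> feature (e.g. "tokens"), so flatten it manually instead.
column = dataset.column_names[0]
unique_items = set(chain.from_iterable(dataset[column]))
print(len(unique_items))
```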
1,183,723,360
https://api.github.com/repos/huggingface/datasets/issues/4046
https://github.com/huggingface/datasets/pull/4046
4,046
Create metric card for XNLI
closed
1
2022-03-28T16:57:58
2022-03-29T13:32:59
2022-03-29T13:27:30
sashavor
[]
Proposing a metric card for XNLI
true
1,183,661,091
https://api.github.com/repos/huggingface/datasets/issues/4045
https://github.com/huggingface/datasets/pull/4045
4,045
Fix CLI dummy data generation
closed
1
2022-03-28T16:09:15
2022-03-31T15:04:12
2022-03-31T14:59:06
albertvillanova
[]
PR: - #3868 broke the CLI dummy data generation. Fix #4044.
true
1,183,658,942
https://api.github.com/repos/huggingface/datasets/issues/4044
https://github.com/huggingface/datasets/issues/4044
4,044
CLI dummy data generation is broken
closed
0
2022-03-28T16:07:37
2022-03-31T14:59:06
2022-03-31T14:59:06
albertvillanova
[ "bug" ]
## Describe the bug We get a TypeError when running CLI dummy data generation: ```shell datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate ``` gives: ``` File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data dataset_builder._prepare_split(split_generator) TypeError: _prepare_split() missing 1 required positional argument: 'check_duplicate_keys' ```
false
1,183,624,475
https://api.github.com/repos/huggingface/datasets/issues/4043
https://github.com/huggingface/datasets/pull/4043
4,043
Create metric card for CUAD
closed
1
2022-03-28T15:38:58
2022-03-29T15:20:56
2022-03-29T15:15:19
sashavor
[]
Proposing a CUAD metric card
true
1,183,599,461
https://api.github.com/repos/huggingface/datasets/issues/4041
https://github.com/huggingface/datasets/issues/4041
4,041
Add support for IIIF in datasets
open
1
2022-03-28T15:19:25
2022-04-05T18:20:53
null
davanstrien
[ "enhancement" ]
This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred. ## What is [IIIF](https://iiif.io/)? IIIF (International Image Interoperability Framework) > is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions. The tl;dr is that IIIF provides various specifications for implementing useful functionality for: - Institutions to make available images for various use cases - Users to have a consistent way of interacting/requesting these images - For developers to have a common standard for developing tools for working with IIIF images that will work across all institutions that implement a particular IIIF standard (for example the image viewer for the BNF can also work for the Library of Congress if they both use IIIF). Some institutions that various levels of support IIF include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/ ## IIIF APIs IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/) ### IIIF Image API The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL: ```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}``` A concrete example of this: ```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg``` As you can see the scheme offers a number of options that can be specified in the URL, for example, size. Using the example URL we return: ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg) We can change the size to request a size of 250 by 250, this is done by changing the size from `full` to `250,250` i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg) We can also request the image with max width 250, max height 250 whilst maintaining the aspect ratio using `!w,h`. i.e. change the url to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg) A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size ## Why would/could this be useful for datasets? There are a few reasons why support for the IIIF Image API could be useful. 
Broadly the ability to have more control over how an image is returned from a server is useful for many ML workflows: - images can be requested in the right size, this prevents having to download/stream large images when the actual desired size is much smaller - can select a subset of an image: it is possible to select a sub-region of an image, this could be useful for example when you already have a bounding box for a subset of an image and then want to use this subset of an image for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request parts of a newspaper image that have been detected as 'photograph', 'illustration' etc for downstream use. - options for quality, rotation, the format can all be encoded in the URL request. These may become particularly useful when pre-training models on large image datasets where the cost of downloading images with 1600 pixel width when you actually want 240 has a larger impact. ## What could this look like in datasets? I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out but hopefully, give a sense of possible approaches that match existing `datasets` methods in their approach. ### Use through datasets scripts Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options via the dataset script: ```python ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg") ``` This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script. ### Support through dataset scripts (with some datasets support) This is similar to the above but `datasets` would offer some way of saying this is a iiif URL and then expose the options associated with IIIF images automatically. i.e. if you did something like: ```python features = {"label": ClassLabel(names=['dog','cat']), "url": datasets.IIIFURL()} ``` inside your loading script, you would automatically have exposed `size`, `fmt` etc. options when loading the dataset. ### Other possible integrations Some other possible pseudocode ways that a user could interact with IIIF URLs: The ability to cast to an `IIIFImage` feature type: ``` ds.cast_column('url', IIIFImage, download=False) ``` The ability to specify some options associated with IIIF urls. ``` ds = ds.set_iiif_options(column='url', size="250,250") ``` I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`, the difference would be that the underlying URL could be modified in various ways. ## prerequisite requirements There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support: ### support for handling failed images loaded via a URL (or a specific IIIFImage feature). Working with images via web requests will inevitably return the odd failed request. If these images are then requests and don't return it would be useful to have a `None` returned instead of an error. 
For example, when using `push_to_hub` `datasets` will try and include the image but currently fails with bad URLs. ```python from datasets import Dataset import datasets urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3 urls.append("badurl.com/image.jpg") data = {"url":urls} ds = Dataset.from_dict(data) ds = ds.cast_column('url', datasets.Image()) ds[3]['url'] ``` returns a `FileNotFoundError`, for streaming large datasets of images using their URLs it could be useful to have `None` returned instead. This has implications for the actual training loop i.e. you now need to somehow skip those examples because of this it might not be desirable to support this. ### Caching support Since IIIF requests images via a URL it would be great to have a way of not requesting the images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142 and I think this would also be very desirable to have here particularly as one of the primary use cases of IIIF may be to do unsupervised pre-training on large datasets of IIIF URLs. ### Support for Parsing IIIF URLs This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the users specify is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share. ## Why it might not be worthwhile/suitable for datasets There are some reasons that this might not be worth implementing: - currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models. - It may end up being better to leave this to the user. It would for example be possible for someone to write map functions to change an IIIF URL to the correct size etc. Adding direct support for IIIF in datasets may potentially not be worth the trouble. - The impact of different approaches to doing image scaling can impact the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images this could have a downstream impact on model performance. think this is something that could be flagged to the end-user in the documentation. This probably also falls into general "gotchas" that probably aren't the `datasets` libraries' role to protect users from. Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets. ## Suggested next steps: I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
false
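For illustration, a rough sketch of building IIIF Image API URLs along the `{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}` pattern described in the issue above (class and field names are hypothetical; libraries such as piffle offer fuller implementations):

```python
from dataclasses import dataclass

@dataclass
class IIIFImageURL:
    """Builds an IIIF Image API request URL from its components."""
    base: str            # scheme + server + prefix + identifier
    region: str = "full"
    size: str = "full"
    rotation: int = 0
    quality: str = "default"
    fmt: str = "jpg"

    def url(self) -> str:
        return f"{self.base}/{self.region}/{self.size}/{self.rotation}/{self.quality}.{self.fmt}"

# Request a thumbnail with max width/height 250 while keeping the aspect ratio.
thumb = IIIFImageURL(
    base="https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44",
    size="!250,250",
)
print(thumb.url())
```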
1,183,468,927
https://api.github.com/repos/huggingface/datasets/issues/4039
https://github.com/huggingface/datasets/pull/4039
4,039
Support streaming xcopa dataset
closed
1
2022-03-28T13:45:55
2022-03-28T16:26:48
2022-03-28T16:21:46
albertvillanova
[]
null
true
1,183,189,827
https://api.github.com/repos/huggingface/datasets/issues/4038
https://github.com/huggingface/datasets/pull/4038
4,038
[DO NOT MERGE] Test doc-builder with skipped installation feature
closed
2
2022-03-28T09:58:31
2023-09-24T10:01:05
2022-03-28T12:29:09
lewtun
[]
This PR is just for testing that we can build PR docs with changes made on the [`skip-install-for-real`](https://github.com/huggingface/doc-builder/tree/skip-install-for-real) branch of `doc-builder`.
true
1,183,144,486
https://api.github.com/repos/huggingface/datasets/issues/4037
https://github.com/huggingface/datasets/issues/4037
4,037
Error while building documentation
closed
2
2022-03-28T09:22:44
2022-03-28T10:01:52
2022-03-28T10:00:48
albertvillanova
[ "bug" ]
## Describe the bug Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct. ```
false
1,183,126,893
https://api.github.com/repos/huggingface/datasets/issues/4036
https://github.com/huggingface/datasets/pull/4036
4,036
Fix building of documentation
closed
2
2022-03-28T09:09:12
2023-09-24T09:55:34
2022-03-28T11:13:22
albertvillanova
[]
Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct. ``` Fix #4037.
true
1,183,067,456
https://api.github.com/repos/huggingface/datasets/issues/4035
https://github.com/huggingface/datasets/pull/4035
4,035
Add zero_division argument to precision and recall metrics
closed
0
2022-03-28T08:19:14
2022-03-28T09:53:07
2022-03-28T09:53:06
albertvillanova
[]
Fix #4025.
true
1,183,033,285
https://api.github.com/repos/huggingface/datasets/issues/4034
https://github.com/huggingface/datasets/pull/4034
4,034
Fix null checksum in xcopa dataset
closed
0
2022-03-28T07:48:14
2022-03-28T08:06:14
2022-03-28T08:06:14
albertvillanova
[]
null
true
1,182,984,445
https://api.github.com/repos/huggingface/datasets/issues/4033
https://github.com/huggingface/datasets/pull/4033
4,033
Fix checksum error in cats_vs_dogs dataset
closed
1
2022-03-28T07:01:25
2022-03-28T07:49:39
2022-03-28T07:44:24
albertvillanova
[]
A recent PR updated the metadata JSON file of the cats_vs_dogs dataset: - #3878 However, that new JSON file contains a None checksum. This PR fixes it. Fix #4032.
true
1,182,595,697
https://api.github.com/repos/huggingface/datasets/issues/4032
https://github.com/huggingface/datasets/issues/4032
4,032
can't download cats_vs_dogs dataset
closed
1
2022-03-27T17:05:39
2022-03-28T07:44:24
2022-03-28T07:44:24
RRaphaell
[ "bug" ]
## Describe the bug Can't download the cats_vs_dogs dataset; the error is: Checksums didn't match for dataset source files. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cats_vs_dogs") ``` ## Expected results The dataset is loaded successfully. ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip'] ## Environment info Fresh Google Colab notebook
false
1,182,415,124
https://api.github.com/repos/huggingface/datasets/issues/4031
https://github.com/huggingface/datasets/issues/4031
4,031
Cannot load the dataset conll2012_ontonotesv5
closed
1
2022-03-27T07:38:23
2022-03-28T06:58:31
2022-03-28T06:31:18
cathyxl
[ "bug" ]
## Describe the bug Cannot load the dataset conll2012_ontonotesv5 ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test") print(dataset) ``` ## Expected results The datasets should be downloaded successfully ## Actual results raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 7.0.0
false
1,182,157,056
https://api.github.com/repos/huggingface/datasets/issues/4030
https://github.com/huggingface/datasets/pull/4030
4,030
Use a constant for the articles regex in SQuAD v2
closed
1
2022-03-26T23:06:30
2022-04-12T16:30:45
2022-04-12T11:00:24
bryant1410
[]
The main reason for doing this is to be able to change the articles list if using another language, for example. It's not the most elegant solution but at least it makes the metric more extensible with no drawbacks. BTW, what could be the best way to make this more generic (i.e., SQuAD in other languages)? Maybe receive a regex as an optional param, with the current value as the default? Similarly for SQuAD v1 (can't they re-use code?).
true
1,181,057,011
https://api.github.com/repos/huggingface/datasets/issues/4029
https://github.com/huggingface/datasets/issues/4029
4,029
Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
closed
4
2022-03-25T17:31:33
2022-05-06T08:35:52
2022-05-06T08:35:52
MoritzLaurer
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I would like to retrieve all texts from a dataset that are semantically similar to a specific input text (query) above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I would like to be able to repeat many different queries on the dataset quickly. **Describe the solution you'd like** Dataset objects currently have the .get_nearest_examples() method for text retrieval via FAISS, but this only allows retrieving a fixed number K of texts instead of everything above a specified similarity threshold. It would be great if HF Datasets would also support the FAISS method .range_search() for retrieving texts above a certain similarity threshold. See details here: https://github.com/facebookresearch/faiss/issues/1273 **Describe alternatives you've considered** I've considered using native FAISS (a sketch of this is shown after this record), but doing this via HF Datasets would be better. My assumption is that Dataset features like dataset streaming make it easier to work with large datasets. **Additional context** The concrete use case is: I have a large dataset (Wikipedia) and I would like to retrieve all paragraphs which are similar to a query. I will use sentence-transformers for encoding the texts.
false
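A rough sketch of the native-FAISS alternative mentioned in the issue above, assuming a flat inner-product index over L2-normalized sentence-transformer-style embeddings so that scores correspond to cosine similarity (the data here is random, for illustration only):

```python
import faiss
import numpy as np

dim = 384
embeddings = np.random.rand(10_000, dim).astype("float32")
faiss.normalize_L2(embeddings)          # cosine similarity == inner product
query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)

index = faiss.IndexFlatIP(dim)
index.add(embeddings)

# Everything above the similarity threshold, instead of a fixed top-k.
threshold = 0.8
lims, scores, ids = index.range_search(query, threshold)
hits = ids[lims[0]:lims[1]]             # row indices into the dataset
print(len(hits), "passages above threshold")
```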
1,181,022,675
https://api.github.com/repos/huggingface/datasets/issues/4028
https://github.com/huggingface/datasets/pull/4028
4,028
Fix docs on audio feature installation
closed
1
2022-03-25T16:55:11
2022-03-31T16:20:47
2022-03-31T16:15:20
albertvillanova
[]
This PR: - Removes the explicit installation of `librosa` (this is installed with `pip install datasets[audio]`) - Adds a warning for Linux users to manually install the non-Python package `libsndfile` - Explains that the installation of `torchaudio` is only necessary to support loading audio datasets containing MP3 audio files Related to #4000.
true
1,180,991,344
https://api.github.com/repos/huggingface/datasets/issues/4027
https://github.com/huggingface/datasets/issues/4027
4,027
ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme'
closed
2
2022-03-25T16:22:28
2022-04-07T10:29:52
2022-03-28T07:58:56
MoritzLaurer
[ "bug", "duplicate" ]
## Describe the bug I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch ``` from datasets import load_dataset squad = load_dataset('crime_and_punish', split='train[:1000]') ``` When I run the line: `squad.add_elasticsearch_index("context", host="localhost", port="9200")` I get the error: `TypeError: __init__() missing 1 required positional argument: 'scheme'` ## Expected results No error message ## Actual results ``` TypeError Traceback (most recent call last) [<ipython-input-23-9205593edef3>](https://localhost:8080/#) in <module>() 1 import elasticsearch ----> 2 squad.add_elasticsearch_index("text", host="localhost", port="9200") 6 frames [/usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py](https://localhost:8080/#) in host_mapping_to_node_config(host) 209 options["path_prefix"] = options.pop("url_prefix") 210 --> 211 return NodeConfig(**options) # type: ignore 212 213 TypeError: __init__() missing 1 required positional argument: 'scheme' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.0 - Platform: Linux, Google Colab - Python version: Google Colab (probably 3.7) - PyArrow version: ?
false
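The traceback in the issue above comes from an elasticsearch-py 8.x client being given host/port keyword arguments without a scheme. A possible workaround, assuming `add_elasticsearch_index` accepts a pre-built client via `es_client` (or, alternatively, pinning `elasticsearch<8`), is to construct the client from a full URL:

```python
from datasets import load_dataset
from elasticsearch import Elasticsearch

squad = load_dataset("crime_and_punish", split="train[:1000]")

# Passing a client built from a full URL (including the scheme) avoids the
# NodeConfig "missing 'scheme'" error raised by elasticsearch-py 8.x.
es_client = Elasticsearch("http://localhost:9200")
text_column = squad.column_names[0]  # index the dataset's text column
squad.add_elasticsearch_index(text_column, es_client=es_client)
```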
1,180,968,774
https://api.github.com/repos/huggingface/datasets/issues/4026
https://github.com/huggingface/datasets/pull/4026
4,026
Support streaming xtreme dataset for bucc18 config
closed
1
2022-03-25T16:00:40
2022-03-25T16:26:50
2022-03-25T16:21:52
albertvillanova
[]
Support streaming xtreme dataset for bucc18 config.
true
1,180,963,105
https://api.github.com/repos/huggingface/datasets/issues/4025
https://github.com/huggingface/datasets/issues/4025
4,025
Missing argument in precision/recall
closed
1
2022-03-25T15:55:52
2022-03-28T09:53:06
2022-03-28T09:53:06
Dref360
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** [`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts an argument `zero_division`, but it is not available in [precision Metric](https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py#L117) Same issue is present for Recall. **Describe the solution you'd like** Support for **kwargs or adding a new field for `zero_division`. **Describe alternatives you've considered** I could filter the warnings myself, but that is not ideal. **Additional context** I can make the requested changes if this is approved.
false
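A sketch of what forwarding `zero_division` to scikit-learn could look like inside the precision metric's `_compute` (the parameter plumbing is illustrative, not the merged implementation):

```python
from sklearn.metrics import precision_score

def _compute(predictions, references, labels=None, pos_label=1,
             average="binary", sample_weight=None, zero_division="warn"):
    # Forward zero_division to scikit-learn so users can silence the
    # UndefinedMetricWarning (or force 0/1) instead of filtering warnings.
    score = precision_score(
        references,
        predictions,
        labels=labels,
        pos_label=pos_label,
        average=average,
        sample_weight=sample_weight,
        zero_division=zero_division,
    )
    return {"precision": float(score)}
```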
1,180,951,817
https://api.github.com/repos/huggingface/datasets/issues/4024
https://github.com/huggingface/datasets/pull/4024
4,024
Doc: image_process small tip
closed
2
2022-03-25T15:44:32
2022-03-31T15:35:35
2022-03-31T15:30:20
FrancescoSaverioZuppichini
[]
I've added a small tip in the `image_process` doc
true
1,180,840,399
https://api.github.com/repos/huggingface/datasets/issues/4023
https://github.com/huggingface/datasets/pull/4023
4,023
Replace yahoo_answers_topics data url
closed
2
2022-03-25T14:08:57
2022-03-28T10:12:56
2022-03-28T10:07:52
lhoestq
[]
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
true
1,180,816,682
https://api.github.com/repos/huggingface/datasets/issues/4022
https://github.com/huggingface/datasets/pull/4022
4,022
Replace dbpedia_14 data url
closed
1
2022-03-25T13:47:21
2022-03-25T15:03:37
2022-03-25T14:58:49
lhoestq
[]
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
true
1,180,805,092
https://api.github.com/repos/huggingface/datasets/issues/4021
https://github.com/huggingface/datasets/pull/4021
4,021
Fix `map` remove_columns on empty dataset
closed
1
2022-03-25T13:36:29
2022-03-29T13:41:31
2022-03-29T13:35:44
lhoestq
[]
On an empty dataset, the `remove_columns` parameter of `map` currently doesn't actually remove the columns: ```python >>> ds = datasets.load_dataset("glue", "rte") >>> ds_filtered = ds.filter(lambda x: x["label"] != -1) >>> ds_mapped = ds_filtered.map(lambda x: x, remove_columns=["label"]) >>> print(repr(ds_mapped.column_names)) { 'train': ['sentence1', 'sentence2', 'idx'], 'validation': ['sentence1', 'sentence2', 'idx'], 'test': ['sentence1', 'sentence2', 'label', 'idx'] } ``` I fixed this error and updated the tests
true
1,180,636,754
https://api.github.com/repos/huggingface/datasets/issues/4020
https://github.com/huggingface/datasets/pull/4020
4,020
Replace amazon_polarity data URL
closed
1
2022-03-25T10:50:57
2022-03-25T15:02:36
2022-03-25T14:57:41
lhoestq
[]
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
true
1,180,628,293
https://api.github.com/repos/huggingface/datasets/issues/4019
https://github.com/huggingface/datasets/pull/4019
4,019
Make yelp_polarity streamable
closed
2
2022-03-25T10:42:51
2022-03-25T15:02:19
2022-03-25T14:57:16
lhoestq
[]
It was using `dl_manager.download_and_extract` on a TAR archive, which is not supported in streaming mode. I replaced this by `dl_manager.iter_archive`
true
1,180,622,816
https://api.github.com/repos/huggingface/datasets/issues/4018
https://github.com/huggingface/datasets/pull/4018
4,018
Replace yelp_review_full data url
closed
1
2022-03-25T10:37:18
2022-03-25T15:01:02
2022-03-25T14:56:10
lhoestq
[]
I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive. Close https://github.com/huggingface/datasets/issues/4005
true
1,180,595,160
https://api.github.com/repos/huggingface/datasets/issues/4017
https://github.com/huggingface/datasets/pull/4017
4,017
Support streaming scan dataset
closed
1
2022-03-25T10:11:28
2022-03-25T12:08:55
2022-03-25T12:03:52
albertvillanova
[]
null
true
1,180,557,828
https://api.github.com/repos/huggingface/datasets/issues/4016
https://github.com/huggingface/datasets/pull/4016
4,016
Support streaming blimp dataset
closed
1
2022-03-25T09:39:10
2022-03-25T11:19:18
2022-03-25T11:14:13
albertvillanova
[]
null
true
1,180,510,856
https://api.github.com/repos/huggingface/datasets/issues/4015
https://github.com/huggingface/datasets/issues/4015
4,015
Can not correctly parse the classes with imagefolder
closed
2
2022-03-25T08:51:17
2022-03-28T01:02:03
2022-03-25T09:27:56
YiSyuanChen
[ "bug" ]
## Describe the bug I try to load my own image dataset with imagefolder, but the parsing of classes is incorrect. ## Steps to reproduce the bug I organized my dataset (ImageNet) in the following structure: ``` - imagenet/ - train/ - n01440764/ - ILSVRC2012_val_00000293.jpg - ...... - n01695060/ - ...... - val/ - n01440764/ - n01695060/ - ...... ``` At first, I followed the instructions from the Huggingface [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification#using-your-own-data) to load my data as: ``` from datasets import load_dataset data_files = {'train': 'imagenet/train', 'val': 'imagenet/val'} ds = load_dataset("nateraw/image-folder", data_files=data_files, task="image-classification") ``` but it resulted following error (I mask my personal path as <PERSONAL_PATH>): ``` FileNotFoundError: Unable to find 'https://huggingface.co/datasets/nateraw/image-folder/resolve/main/imagenet/train' at <PERSONAL_PATH>/ImageNet/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main ``` Next, I followed a recent issue #3960 to load data as: ``` from datasets import load_dataset data_files = {'train': ['imagenet/train/**'], 'val': ['imagenet/val/**']} ds = load_dataset("imagefolder", data_files=data_files, task="image-classification") ``` and the data can be loaded without error as: (I copy val folder to train folder for illustration) ``` >>> ds DatasetDict({ train: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) val: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) }) ``` However, the parsed classes is wrong (should be 1000 classes): ``` >>> ds["train"].features {'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=1, names=['val'], id=None)} ``` ## Expected results I expect that the "labels" in ds["train"].features should contain 1000 classes. ## Actual results The "labels" in ds["train"].features contains only 1 wrong class. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Ubuntu 18.04 - Python version: Python 3.7.12 - PyArrow version: 7.0.0
false
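For the layout described in the issue above, a possible alternative (assuming a `datasets` version where the `imagefolder` builder infers splits and class labels from an ImageNet-style directory tree; behavior may differ in 2.0.0) is to point `data_dir` at the dataset root:

```python
from datasets import load_dataset

# Assumes imagenet/ contains train/ and val/ split folders, each with one
# subfolder per class (n01440764, ...). imagefolder is expected to infer the
# splits and a ClassLabel with the ~1000 synset names from this layout.
ds = load_dataset("imagefolder", data_dir="imagenet")
print(ds["train"].features)  # label column may be named "label" or "labels"
```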
1,180,481,229
https://api.github.com/repos/huggingface/datasets/issues/4014
https://github.com/huggingface/datasets/pull/4014
4,014
Support streaming id_clickbait dataset
closed
1
2022-03-25T08:18:28
2022-03-25T08:58:31
2022-03-25T08:53:32
albertvillanova
[]
null
true
1,180,427,174
https://api.github.com/repos/huggingface/datasets/issues/4013
https://github.com/huggingface/datasets/issues/4013
4,013
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
closed
2
2022-03-25T07:12:02
2022-04-04T08:05:01
2022-03-25T14:16:11
hazalturkmen
[]
## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM' **Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM* *I cannot see the dataset preview.* ``` Server Error Status code: 400 Exception: HTTPError Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true ``` Am I the one who added this dataset ? Yes
false
1,180,350,083
https://api.github.com/repos/huggingface/datasets/issues/4012
https://github.com/huggingface/datasets/pull/4012
4,012
Rename wer to cer
closed
0
2022-03-25T05:06:05
2022-03-28T13:57:25
2022-03-28T13:57:25
pmgautam
[]
The wer variable was changed to cer in the README file.
true
1,179,885,965
https://api.github.com/repos/huggingface/datasets/issues/4011
https://github.com/huggingface/datasets/pull/4011
4,011
Fix SQuAD v2 metric docs on `references` format
closed
2
2022-03-24T18:27:10
2023-07-11T09:35:46
2023-07-11T09:35:15
bryant1410
[ "transfer-to-evaluate" ]
`references` is not a list of dictionaries but a dictionary whose values are lists.
true
1,179,848,036
https://api.github.com/repos/huggingface/datasets/issues/4010
https://github.com/huggingface/datasets/pull/4010
4,010
Fix None issue with Sequence of dict
closed
2
2022-03-24T17:58:59
2022-03-28T10:13:53
2022-03-28T10:08:40
lhoestq
[]
`Features.encode_example` currently fails if it contains a sequence of dicts like `Sequence({"subcolumn": Value("int32")})` and `None` is passed instead of the dict. ```python File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 1310, in encode_example return encode_nested_example(self, example) File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in encode_nested_example return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)} File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in <dictcomp> return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)} File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 998, in encode_nested_example for k, (sub_schema, sub_objs) in zip_dict(schema.feature, obj): File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in zip_dict yield key, tuple(d[key] for d in dicts) File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in <genexpr> yield key, tuple(d[key] for d in dicts) TypeError: 'NoneType' object is not subscriptable ``` I fixed this issue and updated the tests (this case was missing in the tests)
true
1,179,658,611
https://api.github.com/repos/huggingface/datasets/issues/4009
https://github.com/huggingface/datasets/issues/4009
4,009
AMI load_dataset error: sndfile library not found
closed
1
2022-03-24T15:13:38
2022-03-24T15:46:38
2022-03-24T15:17:29
i-am-neo
[ "bug" ]
## Describe the bug Getting error message when loading AMI dataset. ## Steps to reproduce the bug `python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ` ## Expected results A clear and concise description of the expected results. ## Actual results Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
false
1,179,591,068
https://api.github.com/repos/huggingface/datasets/issues/4008
https://github.com/huggingface/datasets/pull/4008
4,008
Support streaming daily_dialog dataset
closed
1
2022-03-24T14:23:23
2022-03-24T15:29:01
2022-03-24T14:46:58
albertvillanova
[]
null
true