| Column | Type | Observed range |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | 58 to 61 characters |
| html_url | string | 46 to 51 characters |
| number | int64 | 1 to 7.72k |
| title | string | 1 to 290 characters |
| state | string | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | 3 to 26 characters |
| labels | list | 0 to 4 items |
| body | string | 0 to 228k characters |
| is_pull_request | bool | 2 classes |
1,251,933,091
https://api.github.com/repos/huggingface/datasets/issues/4417
https://github.com/huggingface/datasets/issues/4417
4,417
how to convert a dict generator into a huggingface dataset.
closed
18
2022-05-29T16:28:27
2022-09-16T14:44:19
2022-09-16T14:44:19
StephennFernandes
[ "question" ]
### Link _No response_ ### Description Hey there, I have used seqio to get a well-distributed mixture of samples from multiple datasets. However, the resultant output from seqio is a Python dict generator, which I cannot convert back into a Hugging Face dataset. The generator contains all the samples needed for training the model, but I cannot convert it into a Hugging Face dataset. The code looks like this: ``` for ex in seqio_data: print(ex["text"]) ``` I need to convert seqio_data (a generator) into a Hugging Face dataset. The complete seqio code is here: ``` import functools import seqio import tensorflow as tf import t5.data from datasets import load_dataset from t5.data import postprocessors from t5.data import preprocessors from t5.evaluation import metrics from seqio import FunctionDataSource, utils TaskRegistry = seqio.TaskRegistry def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None): dataset = load_dataset(**dataset_params) if shuffle: if seed: dataset = dataset.shuffle(seed=seed) else: dataset = dataset.shuffle() while True: for item in dataset[str(split)]: yield item[column] def dataset_fn(split, shuffle_files, seed=None, dataset_params=None): return tf.data.Dataset.from_generator( functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params), output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name) ) @utils.map_over_dataset def target_to_key(x, key_map, target_key): """Assign the value from the dataset to target_key in key_map""" return {**key_map, target_key: x} dataset_name = 'oscar-corpus/OSCAR-2109' subset= 'mr' dataset_params = {"path": dataset_name, "language":subset, "use_auth_token":True} dataset_shapes = None TaskRegistry.add( "oscar_marathi_corpus", source=seqio.FunctionDataSource( dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params), splits=("train", "validation"), caching_permitted=False, num_input_examples=dataset_shapes, ), preprocessors=[ functools.partial( target_to_key, key_map={ "targets": None, }, target_key="targets")], output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)}, metric_fns=[] ) dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset( sequence_length=None, split="train", shuffle=True, num_epochs=1, shard_info=seqio.ShardInfo(index=0, num_shards=10), use_cached=False, seed=42 ) for _, ex in zip(range(5), dataset): print(ex['targets'].numpy().decode()) ``` ### Owner _No response_
false
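A minimal sketch of one way to turn a generator of examples into a `Dataset`, assuming a `datasets` release that provides `Dataset.from_generator` (not yet available in the version discussed above): the generator must terminate and must yield dict examples, so the stand-in below replaces the infinite seqio loop from the issue.
```python
from datasets import Dataset


def dict_generator():
    # Stand-in for the seqio generator in the issue; it terminates
    # and yields one dict per example, keyed by column name.
    for text in ["sample one", "sample two"]:
        yield {"text": text}


ds = Dataset.from_generator(dict_generator)
print(ds[0])  # {'text': 'sample one'}
```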
1,251,875,763
https://api.github.com/repos/huggingface/datasets/issues/4416
https://github.com/huggingface/datasets/pull/4416
4,416
Add LCCC dataset
closed
6
2022-05-29T12:27:19
2022-06-15T10:28:59
2022-06-02T09:13:46
silverriver
[]
Hi, I am trying to add a new dataset, `lccc`. All tests pass.
true
1,251,002,981
https://api.github.com/repos/huggingface/datasets/issues/4415
https://github.com/huggingface/datasets/pull/4415
4,415
Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error
closed
1
2022-05-27T17:03:42
2022-06-07T12:42:25
2022-06-07T12:33:52
mariosasko
[]
Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error. TODO: ~~- [ ] handle token + `{Audio, Image}.embed_storage`~~ - [x] tests
true
1,250,546,888
https://api.github.com/repos/huggingface/datasets/issues/4414
https://github.com/huggingface/datasets/pull/4414
4,414
Rename DatasetBuilder config_name
closed
1
2022-05-27T09:28:02
2022-05-31T15:07:21
2022-05-31T14:58:51
albertvillanova
[]
This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that: - it avoids confusion with the attribute `DatasetBuilder.name`, which is different - it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name` Another, simpler possibility would be to rename it to just `config` instead. Please note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public-facing parameter, but a private one related to the internals of our library. It would have a major impact to also rename it in: - load_dataset - load_dataset_builder: although this could also be considered internal... - in our CLI commands Besides the naming of `name`, I also find the naming of `path` in `load_dataset` really confusing. IMHO, they should have a simpler and more precise meaning (currently, they are too vague). I would propose (maybe for the next major release): ``` load_dataset(dataset, config,... ``` instead of ``` load_dataset(path, name,... ```
true
1,250,259,822
https://api.github.com/repos/huggingface/datasets/issues/4413
https://github.com/huggingface/datasets/issues/4413
4,413
Dataset Viewer issue for ett
closed
3
2022-05-27T02:12:35
2022-06-15T07:30:46
2022-06-15T07:30:46
dgcnz
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/ett ### Description Timestamp is not JSON serializable. ``` Status code: 500 Exception: Status500Error Message: Type is not JSON serializable: Timestamp ``` ### Owner No
false
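A minimal sketch of the serialization failure behind this report, assuming the viewer ultimately hits a standard JSON encoder; the `start` column name is illustrative.
```python
import json

import pandas as pd

row = {"start": pd.Timestamp("2016-07-01 00:00:00")}  # illustrative timestamp column

try:
    json.dumps(row)  # the stdlib encoder rejects Timestamp objects
except TypeError as err:
    print(err)

print(json.dumps({"start": row["start"].isoformat()}))  # fine once converted to a string
```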
1,249,490,179
https://api.github.com/repos/huggingface/datasets/issues/4412
https://github.com/huggingface/datasets/pull/4412
4,412
Skip hidden files/directories in data files resolution and `iter_files`
closed
6
2022-05-26T12:10:28
2022-06-15T17:11:25
2022-06-01T13:04:16
mariosasko
[]
Fix #4115
true
1,249,462,390
https://api.github.com/repos/huggingface/datasets/issues/4411
https://github.com/huggingface/datasets/pull/4411
4,411
Update `_format_columns` in `remove_columns`
closed
20
2022-05-26T11:40:06
2022-06-14T19:05:37
2022-06-14T16:01:56
alvarobartt
[]
As explained in #4398, calling `dataset.add_faiss_index` after a certain sequence of operations (`cast_column`, `map`, and `remove_columns`) fails because it tries to look for already-removed columns. After testing some possible fixes, setting the dataset format right after removing the columns seems to work fine, so I added a call to `.set_format` in the `remove_columns` function. Hope this helps!
true
1,249,148,457
https://api.github.com/repos/huggingface/datasets/issues/4410
https://github.com/huggingface/datasets/pull/4410
4,410
Remove Google Drive URL in spider dataset
closed
1
2022-05-26T06:17:35
2022-05-26T06:48:42
2022-05-26T06:40:12
albertvillanova
[]
The `spider` dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license. Fix #4401.
true
1,249,083,179
https://api.github.com/repos/huggingface/datasets/issues/4409
https://github.com/huggingface/datasets/pull/4409
4,409
Update: add using pcm bytes (#4323)
closed
16
2022-05-26T04:26:36
2022-07-07T13:27:29
2022-07-07T13:16:09
YooSungHyun
[]
First of all, please see #4323 for why I cannot use {"path","array","sampling_rate"}: sf.write(format="wav") and sf.read(BytesIO) change my PCM data values, probably because WAV has a header while raw PCM does not. Regarding variable naming, the PCM data is of "byte" type, so I think the name "array" does not fit. So I use the scipy and numpy libraries (which are Hugging Face dependencies) and, following @lhoestq's answer: 1. encode -> use the sampling_rate and PCM bytes to produce WAV-style bytes (scipy.wavfile.write to bytes) 2. convert bytes using the fairseq-style PCM audio read [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py) 3. decode -> read with wavfile.read This way my PCM bytes are not corrupted when converted to float data, and other audio types (WAV) remain safe. Please check!
true
1,248,687,574
https://api.github.com/repos/huggingface/datasets/issues/4408
https://github.com/huggingface/datasets/pull/4408
4,408
Update imagenet gate
closed
1
2022-05-25T20:32:19
2022-05-25T20:45:11
2022-05-25T20:36:47
lhoestq
[]
null
true
1,248,671,778
https://api.github.com/repos/huggingface/datasets/issues/4407
https://github.com/huggingface/datasets/issues/4407
4,407
Dataset Viewer issue for conll2012_ontonotesv5
closed
3
2022-05-25T20:18:33
2022-06-07T18:39:16
2022-06-07T18:39:16
jiangwangyi
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/conll2012_ontonotesv5 ### Description Dataset viewer outage. ### Owner No
false
1,248,626,622
https://api.github.com/repos/huggingface/datasets/issues/4406
https://github.com/huggingface/datasets/pull/4406
4,406
Improve language tag for PIAF dataset
closed
0
2022-05-25T19:41:55
2022-05-27T14:51:23
2022-05-27T14:51:23
lbourdois
[]
Hi, as pointed out by @lhoestq in this discussion (https://huggingface.co/datasets/asi/wikitext_fr/discussions/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature, so you have to go through GitHub. This modification should allow better referencing, since only the xx language tags are currently taken into account and not the xx-xx ones.
true
1,248,574,087
https://api.github.com/repos/huggingface/datasets/issues/4405
https://github.com/huggingface/datasets/issues/4405
4,405
[TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2
closed
1
2022-05-25T18:56:43
2022-06-07T14:27:20
2022-06-07T14:27:20
jiangwangyi
[ "bug" ]
## Describe the bug I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features. ## Steps to reproduce the bug ```python import os from typing import ( List, Dict, ) from collections import ( defaultdict, ) from dataclasses import ( dataclass, ) from datasets import ( load_dataset, ) @dataclass class ConllConverter: path: str name: str cache_dir: str def __post_init__( self, ): self.dataset = load_dataset( path=self.path, name=self.name, cache_dir=self.cache_dir, ) def convert( self, ): class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature # label_set = list(set([ # label.split("-")[1] if label != "O" else label for label in class_label.names # ])) def prepare_chunk(token, entity): assert len(token) == len(entity) # Sequence length length = len(token) # Variable used entity_chunk = defaultdict(list) idx = flag = 0 # While loop while idx < length: if entity[idx] == "O": flag += 1 idx += 1 else: iob_tp, lab_tp = entity[idx].split("-") assert iob_tp == "B" idx += 1 while idx < length and entity[idx].startswith("I-"): idx += 1 entity_chunk[lab_tp].append(token[flag: idx]) flag = idx entity_chunk = dict(entity_chunk) # for label in label_set: # if label != "O" and label not in entity_chunk.keys(): # entity_chunk[label] = None return entity_chunk def prepare_features( batch: Dict[str, List], ) -> Dict[str, List]: sentence = [ sent for doc_sent in batch["sentences"] for sent in doc_sent ] feature = { "sentence": list(), } for sent in sentence: token = sent["words"] entity = class_label.int2str(sent["named_entities"]) entity_chunk = prepare_chunk(token, entity) sent_feat = { "token": token, "entity": entity, "entity_chunk": entity_chunk, } feature["sentence"].append(sent_feat) return feature column_names = self.dataset.column_names["train"] dataset = self.dataset.map( function=prepare_features, with_indices=False, batched=True, batch_size=3, remove_columns=column_names, num_proc=1, ) dataset.save_to_disk( dataset_dict_path=os.path.join("data", self.path, self.name) ) if __name__ == "__main__": converter = ConllConverter( path="conll2012_ontonotesv5", name="english_v4", cache_dir="cache", ) converter.convert() ``` ## Expected results I want to use the dataset to perform NER task and to change the label list into a {Entity Type: list of spans} format. 
## Actual results <details> <summary>Traceback</summary> ```python Traceback (most recent call last): | 0/81 [00:00<?, ?ba/s] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single writer.write_batch(batch) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp> arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>> to {'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), 
length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)} """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module> converter.convert() File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert dataset = self.dataset.map( File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map { File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp> k: dataset.map( File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map transformed_shards[index] = async_result.get() File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get raise self._value TypeError: Couldn't cast array of type struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>> to {'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', 
id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)} ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Ubuntu 18.04 - Python version: 3.9.7 - PyArrow version: 7.0.0
false
1,248,572,899
https://api.github.com/repos/huggingface/datasets/issues/4404
https://github.com/huggingface/datasets/issues/4404
4,404
Dataset should have a `.name` field
closed
2
2022-05-25T18:56:08
2022-09-13T15:09:30
2022-06-16T10:47:53
f4hy
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** When building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results with things like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`. Without some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used. **Describe the solution you'd like** The DatasetInfo class should have a `name` field which is the name of the dataset. Then, if a given dataset evolves over time, the `version` can be updated, while the different versions remain versions of the same dataset with a unique `name`. The name could then be accessed via `dataset.name`. **Describe alternatives you've considered** For my own purposes I am considering making a `NamedDataset[Dataset]` wrapper where the subclass just has a .name field. **Additional context** My guess is that most use cases do not work with more than one dataset in a given pipeline, so a name is not really needed. This has surprised me, though, as one of the advantages of a standard dataset interface is being able to build pipelines that can be passed any dataset, separating the responsibility of dataset loading from the train or eval pipeline.
false
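A minimal sketch of the wrapper alternative mentioned in the issue, assuming a thin delegating class is acceptable; `NamedDataset` is illustrative and not part of the `datasets` API.
```python
from dataclasses import dataclass

from datasets import Dataset


@dataclass
class NamedDataset:
    """Pairs a Dataset with an explicit name so pipelines can log which data they ran on."""

    dataset: Dataset
    name: str

    def __getattr__(self, attr):
        # Delegate everything else to the wrapped Dataset.
        return getattr(self.dataset, attr)


# Usage: named = NamedDataset(dataset=my_dataset, name="squad")
# print(f"Evaluating on {named.name}")
```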
1,248,390,134
https://api.github.com/repos/huggingface/datasets/issues/4403
https://github.com/huggingface/datasets/pull/4403
4,403
Uncomment logging deactivation for ArrowBasedBuilder
closed
1
2022-05-25T16:46:15
2022-05-31T08:33:36
2022-05-31T08:25:02
thomasw21
[]
null
true
1,248,078,067
https://api.github.com/repos/huggingface/datasets/issues/4402
https://github.com/huggingface/datasets/pull/4402
4,402
Skip identical files in `push_to_hub` instead of overwriting
closed
1
2022-05-25T13:12:51
2022-05-25T15:16:36
2022-05-25T15:08:03
mariosasko
[]
Skip identical files instead of overwriting them, to save bandwidth and to circumvent (user-side/server-side) errors that can arise when working with large datasets due to long-lasting HTTP connections: repeated calls to `push_to_hub` can then resume an upload. To be able to check whether an upload can be resumed, this PR modifies the shard naming scheme from: ``` data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].parquet ``` to: ``` data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]-<SHARD_FINGERPRINT>.parquet ``` cc @LysandreJik
true
1,247,695,921
https://api.github.com/repos/huggingface/datasets/issues/4401
https://github.com/huggingface/datasets/issues/4401
4,401
"NonMatchingChecksumError" when importing 'spider' dataset
closed
2
2022-05-25T07:45:07
2022-05-26T06:40:12
2022-05-26T06:40:12
OmarAlaaeldein
[ "hosted-on-google-drive" ]
## Describe the bug When importing 'spider' dataset [https://huggingface.co/datasets/spider] an error occurs ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('spider') ``` ## Expected results Dataset object ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0'] ## Environment info - `datasets` version: 2.2.2 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.11 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
false
1,247,404,237
https://api.github.com/repos/huggingface/datasets/issues/4400
https://github.com/huggingface/datasets/issues/4400
4,400
load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py.
closed
1
2022-05-25T03:10:44
2022-10-24T06:10:27
2022-05-25T03:26:36
cailun01
[ "bug" ]
## Describe the bug Could not reach wikitext-2-raw-v1.py ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikitext-2-raw-v1") ``` ## Expected results Download `wikitext-2-raw-v1` dataset successfully. ## Actual results ``` File "load_datasets.py", line 13, in <module> load_dataset("wikitext-2-raw-v1") File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1715, in load_dataset **config_kwargs, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1536, in load_dataset_builder data_files=data_files, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1282, in dataset_module_factory raise e1 from None File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1224, in dataset_module_factory dynamic_modules_path=dynamic_modules_path, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 559, in get_module local_path = self.download_loading_script(revision) File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 539, in download_loading_script return cached_path(file_path, download_config=download_config) File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 246, in cached_path download_desc=download_config.download_desc, File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 582, in get_from_cache raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.2.2/datasets/wikitext-2-raw-v1/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. (read timeout=100)",),)) ``` I tried to download wikitext-2-raw-v1.py by chrome and got: ![image](https://user-images.githubusercontent.com/20658907/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: CentOS 7 - Python version: 3.6 - PyArrow version: 3.0.0
false
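For reference, a minimal sketch of the call that resolves correctly, under the assumption that the intent was the `wikitext` dataset with the `wikitext-2-raw-v1` configuration rather than a standalone dataset of that name.
```python
from datasets import load_dataset

# "wikitext-2-raw-v1" is a configuration of the "wikitext" dataset,
# so it is passed as the second argument instead of as the dataset name.
ds = load_dataset("wikitext", "wikitext-2-raw-v1")
print(ds["train"][0])
```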
1,246,948,299
https://api.github.com/repos/huggingface/datasets/issues/4399
https://github.com/huggingface/datasets/issues/4399
4,399
LocalDatasetModuleFactoryWithoutScript extracts invalid builder name
closed
5
2022-05-24T18:03:01
2022-09-12T15:30:43
2022-09-12T15:30:43
apohllo
[ "bug", "good first issue" ]
## Describe the bug Trying to load a local dataset raises an error indicating that the config builder has to have a name. No error should be reported, since the call is completely valid. ## Steps to reproduce the bug ```python load_dataset("./data/some-dataset/", name="some-name") ``` ## Expected results The dataset should be loaded. ## Actual results ``` Traceback (most recent call last): File "train_lquad.py", line 19, in <module> load(tokenize_target_function, tokenize_target_function, {}, tokenizer) File "train_lquad.py", line 14, in load dataset = load_dataset("./data/lquad/", name="lquad") File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1708, in load_dataset builder_instance = load_dataset_builder( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1560, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 269, in __init__ self.config, self.config_id = self._create_builder_config( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 403, in _create_builder_config raise ValueError(f"BuilderConfig must have a name, got {builder_config.name}") ValueError: BuilderConfig must have a name, got ``` ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.6 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 The error is probably on line 795 of load.py: ``` builder_kwargs = { "hash": hash, "data_files": data_files, "name": os.path.basename(self.path), "base_path": self.path, **builder_kwargs, } ``` `os.path.basename` returns an empty string for a directory path that ends in a separator, rather than the name of the directory.
false
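A minimal sketch of the `os.path.basename` behaviour described at the end of the report; the trailing separator is what yields the empty string, and `os.path.normpath` is one common way around it.
```python
import os

path = "./data/some-dataset/"

print(os.path.basename(path))                    # '' because the path ends with a separator
print(os.path.basename(os.path.normpath(path)))  # 'some-dataset' once the trailing slash is stripped
```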
1,246,666,749
https://api.github.com/repos/huggingface/datasets/issues/4398
https://github.com/huggingface/datasets/issues/4398
4,398
Calling `cast_column`/`remove_columns` and a sequence of `map` operations ends up making `faiss` fail with `ValueError`
closed
4
2022-05-24T14:41:34
2022-06-14T16:01:56
2022-06-14T16:01:56
alvarobartt
[ "bug" ]
First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue. ## Describe the bug Calling a certain combination of operations over a 🤗 `Dataset` and then trying to calculate the `faiss` index with `.add_faiss_index` ends up throwing an exception while trying to set the format back of a previously removed column. But this just happens over certain conditions... I'll present some scenarios below! ## Steps to reproduce the bug Assuming the following dataset named `sample.csv` with some IMDb data: ```csv id,title,summary 1877830,"The Batman","When a sadistic serial killer begins murdering key political figures in Gotham, Batman is forced to investigate the city's hidden corruption and question his family's involvement." 9419884,"Doctor Strange in the Multiverse of Madness","Doctor Strange teams up with a mysterious teenage girl from his dreams who can travel across multiverses, to battle multiple threats, including other-universe versions of himself, which threaten to wipe out millions across the multiverse. They seek help from Wanda the Scarlet Witch, Wong and others." 11138512,"The Northman","From visionary director Robert Eggers comes The Northman, an action-filled epic that follows a young Viking prince on his quest to avenge his father's murder." 1745960,"Top Gun: Maverick","After more than thirty years of service as one of the Navy's top aviators, Pete Mitchell is where he belongs, pushing the envelope as a courageous test pilot and dodging the advancement in rank that would ground him." ``` We'll be able to reproduce the bug using the following piece of code: ```python # Sample code to reproduce the bug from transformers import DPRContextEncoder, DPRContextEncoderTokenizer import torch torch.set_grad_enabled(False) ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") from datasets import load_dataset, Value ds = load_dataset("csv", data_files=["sample.csv"], split="train") ds = ds.cast_column("id", Value("int32")) # from `int64` to `int32` ds = ds.map(lambda x: {"inputs": f"{ctx_tokenizer.sep_token}".join(["title", "summary"])}) ds = ds.remove_columns(["title", "summary"]) def generate_embeddings(x): return {"embeddings": ctx_encoder(**ctx_tokenizer(x["inputs"], return_tensors="pt"))[0][0].numpy()} ds = ds.map(generate_embeddings) ds = ds.remove_columns("inputs") ds.add_faiss_index(column="embeddings") # It fails here! ``` The code above is an adaptation of https://huggingface.co/docs/datasets/faiss_es, for the sake of presenting the bug with a simple example. ## Expected results Ideally, the `faiss` index should be calculated over the 🤗 `Dataset` and no exception should be triggered. ## Actual results But what happens instead is that a `ValueError: Columns ['inputs'] not in the dataset. Current columns in the dataset: ['id', 'embeddings']`, which makes no sense as that column has been previously dropped. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.31 - Python version: 3.9.5 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
1,246,597,632
https://api.github.com/repos/huggingface/datasets/issues/4397
https://github.com/huggingface/datasets/pull/4397
4,397
Fix dependency on dill version
closed
1
2022-05-24T13:54:23
2022-10-26T08:45:37
2022-05-25T13:54:08
albertvillanova
[]
We had to make a hotfix by pinning dill: - #4380 because, from version 0.3.5, our custom `save_function` pickling function was raising an exception: - #4379 This PR fixes this by implementing our custom `save_function` depending on the installed version of dill. CC: @anivegesana The following needs to be merged first: - [x] #4384 - so that a circular import is fixed It is also convenient to merge first: - [x] #4385
true
1,245,479,399
https://api.github.com/repos/huggingface/datasets/issues/4396
https://github.com/huggingface/datasets/pull/4396
4,396
Fix URL in gem dataset for totto config
closed
1
2022-05-23T17:16:12
2022-05-24T05:49:11
2022-05-24T05:41:00
albertvillanova
[]
As commented in: - https://github.com/huggingface/datasets/issues/4386#issuecomment-1134902372 CC: @StevenTang1998
true
1,245,436,486
https://api.github.com/repos/huggingface/datasets/issues/4395
https://github.com/huggingface/datasets/pull/4395
4,395
Add Pascal VOC dataset
closed
6
2022-05-23T16:34:05
2023-09-24T09:37:05
2022-10-03T09:36:56
nateraw
[ "dataset contribution" ]
This PR adds the Pascal VOC dataset in the same way TFDS has it added. I believe we can iterate on this dataset and in future versions include more data, such as segmentation masks, but for now I think it is a good idea to just add it the same way as TFDS to get a solid first version out there.
true
1,245,221,657
https://api.github.com/repos/huggingface/datasets/issues/4394
https://github.com/huggingface/datasets/issues/4394
4,394
trainer became extremely slow after reload dataset by `load_from_disk`
open
5
2022-05-23T14:04:37
2023-11-23T07:40:30
null
conan1024hao
[ "bug" ]
## Describe the bug Due to memory problem, I need to save my tokenized datasets locally by CPU and reload it by multi GPU for running training script. However, after I reload it by `load_from_disk` and start training, the speed is extremely slow. It says I need about 1500 hours with 8 A100 cards. Before this, I can run the whole script in one day with a single A100 card. Since I am try to pre-train a BERT, **my dataset is very large(29058165 rows)** ## Steps to reproduce the bug ```python tokenized_datasets.save_to_disk( "/pathto/dataset" ) tokenized_datasets = load_from_disk( "/pathto/dataset" ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"] if training_args.do_train else None, eval_dataset=tokenized_datasets["validation"] if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, ) train_result = trainer.train(resume_from_checkpoint=checkpoint) ``` ## Expected results Without the save and reload process, I only need about one day to run the whole script with one A100 card. ## Actual results ``` [INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training ***** [INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165 [INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5 [INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16 [INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256 [INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2 [INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540 0%| | 1/567540 [00:09<1544:49:04, 9.80s/it] 0%| | 2/567540 [00:17<1320:00:17, 8.37s/it] 0%| | 3/567540 [00:26<1393:10:17, 8.84s/it] 0%| | 4/567540 [00:34<1344:56:33, 8.53s/it] 0%| | 5/567540 [00:43<1359:36:12, 8.62s/it] ``` ## Environment info ``` torch 1.11.0+cu113 torchaudio 0.11.0+cu113 torchvision 0.12.0+cu113 transformers 4.18.0 datasets 2.2.2 ```
false
1,244,876,662
https://api.github.com/repos/huggingface/datasets/issues/4393
https://github.com/huggingface/datasets/pull/4393
4,393
Update CI deprecated legacy image
closed
1
2022-05-23T09:35:42
2022-05-23T10:08:28
2022-05-23T09:59:55
albertvillanova
[]
Now our CI still uses a deprecated legacy image: > You’re using a [deprecated Docker convenience image.](https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034) Upgrade to a next-gen Docker convenience image. This PR updates to next-generation convenience image. Related to: - #2955
true
1,244,859,971
https://api.github.com/repos/huggingface/datasets/issues/4392
https://github.com/huggingface/datasets/pull/4392
4,392
remove int documentation from logging docs
closed
1
2022-05-23T09:24:55
2022-05-23T15:16:55
2022-05-23T15:08:32
lvwerra
[]
Removes the `int` documentation from the [logging section](https://huggingface.co/docs/datasets/package_reference/logging_methods#levels) of the docs.
true
1,244,839,185
https://api.github.com/repos/huggingface/datasets/issues/4391
https://github.com/huggingface/datasets/pull/4391
4,391
Refactor column mappings for question answering datasets
closed
5
2022-05-23T09:13:14
2022-05-24T12:57:00
2022-05-24T12:48:48
lewtun
[]
This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain. As observed in https://github.com/huggingface/datasets/pull/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR. cc @sashavor
true
1,244,835,877
https://api.github.com/repos/huggingface/datasets/issues/4390
https://github.com/huggingface/datasets/pull/4390
4,390
Fix metadata validation
closed
1
2022-05-23T09:11:20
2022-06-01T09:27:52
2022-06-01T09:19:25
albertvillanova
[]
Since Python 3.8, the typing module: - raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__` - provides the `get_args` function instead: `get_args(List)` This PR implements a fix for Python >=3.8 while maintaining backward compatibility.
true
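A minimal sketch of the version split this PR describes, assuming stock `typing` behaviour; `get_args` ships with Python 3.8 and later.
```python
import sys
from typing import List

if sys.version_info >= (3, 8):
    from typing import get_args
    print(get_args(List[str]))  # (str,) via the supported helper
    print(get_args(List))       # () instead of an AttributeError from List.__args__
else:
    print(List[str].__args__)   # older interpreters expose __args__ directly
```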
1,244,693,690
https://api.github.com/repos/huggingface/datasets/issues/4389
https://github.com/huggingface/datasets/pull/4389
4,389
Fix bug in gem dataset for wiki_auto_asset_turk config
closed
1
2022-05-23T07:19:49
2022-05-23T10:38:26
2022-05-23T10:29:55
albertvillanova
[]
This PR fixes some URLs. Fix #4386.
true
1,244,645,158
https://api.github.com/repos/huggingface/datasets/issues/4388
https://github.com/huggingface/datasets/pull/4388
4,388
Set builder name from module instead of class
closed
1
2022-05-23T06:26:35
2022-05-25T05:24:43
2022-05-25T05:16:15
albertvillanova
[]
Currently, the builder name attribute is set from the builder class name. This PR sets the builder name attribute from the module name instead. Some motivating reasons: - The dataset ID is relevant and unique among all datasets, and it is directly related to the repository name, i.e., the name of the directory containing the dataset - The name of the module (i.e. the file containing the loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name - On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is a subclass of `DatasetBuilder` (independently of its name). We do not put any constraint on the naming of the builder class and indeed it can have a name completely different from its module/directory/dataset_id IMO it makes more sense to align the caching directory name with the dataset_id/directory/module name instead of the builder class name. Fix #4381.
true
1,244,147,817
https://api.github.com/repos/huggingface/datasets/issues/4387
https://github.com/huggingface/datasets/issues/4387
4,387
device/google/accessory/adk2012 - Git at Google
closed
0
2022-05-22T04:57:19
2022-05-23T06:36:27
2022-05-23T06:36:27
Aeckard45
[]
"git clone https://android.googlesource.com/device/google/accessory/adk2012" https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012
false
1,243,965,532
https://api.github.com/repos/huggingface/datasets/issues/4386
https://github.com/huggingface/datasets/issues/4386
4,386
Bug for wiki_auto_asset_turk from GEM
closed
7
2022-05-21T12:31:30
2022-05-24T05:55:52
2022-05-23T10:29:55
StevenTang1998
[ "bug" ]
## Describe the bug The script of wiki_auto_asset_turk for GEM may be out of date. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('gem', 'wiki_auto_asset_turk') ``` ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset builder_instance.download_and_prepare( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare self._download_and_prepare( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators dl_dir = dl_manager.download_and_extract(_URLs[self.config.name]) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download downloaded_path_or_paths = map_nested( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested mapped = [ File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested return function(data_struct) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path output_path = get_from_cache( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig ```
false
1,243,921,287
https://api.github.com/repos/huggingface/datasets/issues/4385
https://github.com/huggingface/datasets/pull/4385
4,385
Test dill
closed
4
2022-05-21T08:57:43
2022-05-25T08:30:13
2022-05-25T08:21:48
albertvillanova
[]
Regression test for future releases of `dill`. Related to #4379.
true
1,243,919,748
https://api.github.com/repos/huggingface/datasets/issues/4384
https://github.com/huggingface/datasets/pull/4384
4,384
Refactor download
closed
4
2022-05-21T08:49:24
2022-05-25T10:52:02
2022-05-25T10:43:43
albertvillanova
[]
This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package "download". Some motivating arguments: - understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities - abstraction: the level of abstraction of "download" (higher) is not the same as "utils" (lower); putting different levels of abstraction together, makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower - architectural: "download" is a domain-specific functionality of our library/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, "utils" are always a low-level set of functionalities, not directly related to our domain/business core logic (all libraries have "utils"), thus at the periphery of our lib architecture Also note that when a library is not architecturally designed following simple, neat, clean principles, this has a negative impact on extensibility, making more and more difficult to make enhancements. As a concrete example in this case, please see: https://app.circleci.com/pipelines/github/huggingface/datasets/12185/workflows/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d/jobs/72860 - After an extension, a circular import is found - Diving into the cause of this circular import, see the dependency flow, which should be from higher to lower levels of abstraction: ``` ImportError while loading conftest '/home/circleci/datasets/tests/conftest.py'. tests/conftest.py:12: in <module> import datasets ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/__init__.py:37: in <module> from .arrow_dataset import Dataset, concatenate_datasets ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/arrow_dataset.py:59: in <module> from . import config ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/config.py:8: in <module> from .utils.logging import get_logger ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/__init__.py:30: in <module> from .download_manager import DownloadConfig, DownloadManager, DownloadMode ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/download_manager.py:39: in <module> from .py_utils import NestedDataStructure, map_nested, size_str ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/py_utils.py:608: in <module> if config.DILL_VERSION < version.parse("0.3.5"): E AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION' ``` Imports: - datasets - Dataset: lower level than datasets - config: lower level than Dataset - logger: lower level than config - DownloadManager: !!! HIGHER level of abstraction than logger!! Why when importing logger we require importing DownloadManager?!? - Logically, it does not make sense - This is due to an error in the design/architecture of our library: - To import the logger, we need to import it from `.utils.logging` - To import `.utils.logging` we need to import `.utils` - The import of `.utils` require the import of all its submodules defined in `utils.__init__.py`, among them: `.utils.download_manager`! 
When putting `logging` and `download_manager` both inside `utils`, in order to import `logging` we need to import `download_manager` first: this is a strong coupling between modules, and moreover between modules at different levels of abstraction (to import a lower-level module, we are required to import a higher-level module). Additionally, it is clear that it makes no sense that in order to import `logging` we have to import `download_manager` first.
true
1,243,856,981
https://api.github.com/repos/huggingface/datasets/issues/4383
https://github.com/huggingface/datasets/issues/4383
4,383
L
closed
0
2022-05-21T03:47:58
2022-05-21T19:20:13
2022-05-21T19:20:13
AronCodes21
[ "bug" ]
## Describe the L L ## Expected L A clear and concise lmll Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
false
1,243,839,783
https://api.github.com/repos/huggingface/datasets/issues/4382
https://github.com/huggingface/datasets/issues/4382
4,382
First time trying
closed
0
2022-05-21T02:15:18
2022-05-21T19:20:44
2022-05-21T19:20:44
Aeckard45
[ "dataset request" ]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,243,478,863
https://api.github.com/repos/huggingface/datasets/issues/4381
https://github.com/huggingface/datasets/issues/4381
4,381
Bug in caching 2 datasets both with the same builder class name
closed
2
2022-05-20T18:18:03
2022-06-02T08:18:37
2022-05-25T05:16:15
NouamaneTazi
[ "bug" ]
## Describe the bug The two datasets `mteb/mtop_intent` and `mteb/mtop_domain `use both the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent` then datasets will not load `mteb/mtop_domain`. If you delete this cache folder and flip the order how you load the two datasets , you will get the opposite datasets loaded (difference is here in terms of the label and label_text). ## Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("mteb/mtop_intent", "en") print(dataset['train'][0]) dataset = datasets.load_dataset("mteb/mtop_domain", "en") print(dataset['train'][0]) ``` ## Expected results ``` Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_intent/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_domain/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'} ``` ## Actual results ``` Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.1 - Platform: macOS-12.1-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
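Until the naming fix lands, a minimal sketch of one possible workaround, assuming the collision is entirely due to the shared cache folder: give each dataset its own cache directory (the directory names below are illustrative).
```python
import datasets

# Separate cache directories keep the two builders, which share a class name,
# from reusing each other's cached Arrow files.
intent = datasets.load_dataset("mteb/mtop_intent", "en", cache_dir="cache/mtop_intent")
domain = datasets.load_dataset("mteb/mtop_domain", "en", cache_dir="cache/mtop_domain")

print(intent["train"][0]["label_text"])
print(domain["train"][0]["label_text"])
```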
1,243,183,054
https://api.github.com/repos/huggingface/datasets/issues/4380
https://github.com/huggingface/datasets/pull/4380
4,380
Pin dill
closed
1
2022-05-20T13:54:19
2022-06-13T10:03:52
2022-05-20T16:33:04
albertvillanova
[]
Hotfix #4379. CC: @sgugger
true
1,243,175,854
https://api.github.com/repos/huggingface/datasets/issues/4379
https://github.com/huggingface/datasets/issues/4379
4,379
Latest dill release raises exception
closed
8
2022-05-20T13:48:36
2022-05-21T15:53:26
2022-05-20T17:06:27
albertvillanova
[ "bug" ]
## Describe the bug As reported by @sgugger, latest dill release is breaking things with Datasets. ``` ______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________ self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None def get(self, timeout=None): self.wait(timeout) if not self.ready(): raise TimeoutError if self._success: return self._value else: > raise self._value E TypeError: '>' not supported between instances of 'NoneType' and 'float' ```
false
1,242,935,373
https://api.github.com/repos/huggingface/datasets/issues/4378
https://github.com/huggingface/datasets/pull/4378
4,378
Tidy up license metadata for google_wellformed_query, newspop, sick
closed
2
2022-05-20T10:16:12
2022-05-24T13:50:23
2022-05-24T13:10:27
leondz
[]
Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now!
true
1,242,746,186
https://api.github.com/repos/huggingface/datasets/issues/4377
https://github.com/huggingface/datasets/pull/4377
4,377
Fix checksum and bug in irc_disentangle dataset
closed
1
2022-05-20T07:29:28
2022-05-20T09:34:36
2022-05-20T09:26:32
albertvillanova
[]
There was a bug in the filepath segment: - wrong: `jkkummerfeld-irc-disentanglement-fd379e9` - right: `jkkummerfeld-irc-disentanglement-35f0a40` There was also a bug in the checksum of the downloaded file. This PR fixes these issues. Partially fixes #4376.
true
1,242,218,144
https://api.github.com/repos/huggingface/datasets/issues/4376
https://github.com/huggingface/datasets/issues/4376
4,376
irc_disentagle viewer error
closed
5
2022-05-19T19:15:16
2023-01-12T16:56:13
2022-06-02T08:20:00
labouz
[]
The dataset viewer shows this message for the "ubuntu" config's "train", "test", and "validation" splits: ``` Server error Status code: 400 Exception: ValueError Message: Cannot seek streaming HTTP file ``` It appears to give the same message for the "channel_two" data as well. I also get a checksums error when using `load_dataset()` with this dataset, even with the `download_mode` and `ignore_verifications` options set. I referenced the issue here: https://github.com/huggingface/datasets/issues/3807
false
1,241,921,147
https://api.github.com/repos/huggingface/datasets/issues/4375
https://github.com/huggingface/datasets/pull/4375
4,375
Support DataLoader with num_workers > 0 in streaming mode
closed
7
2022-05-19T15:00:31
2022-07-04T16:05:14
2022-06-10T20:47:27
lhoestq
[]
### Issue It's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers: - the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950 - streaming extension is failing: https://github.com/huggingface/datasets/issues/3951 - `fsspec` doesn't work out of the box in subprocesses ### Solution in this PR I fixed these to enable passing an `IterableDataset` to a `torch.utils.data.DataLoader` with `num_workers > 0`. I also had to shard the `IterableDataset` to give each worker a shard, otherwise data would be duplicated. This is implemented in `TorchIterableDataset.__iter__` and uses the new `IterableDataset._iter_shard(shard_idx)` method I also had to do a few changes the patching that enable streaming in dataset scripts: - the patches are now always applied - not just for streaming mode. They're applied when a builder is instantiated - I improved it to also check for renamed modules or attributes (ex: pandas vs pd) - I grouped all the patches of pathlib.Path into a class `xPath`, so that `Path` outside of dataset scripts stay unchanged - otherwise I didn't change the content of the extended Path methods for streaming - I fixed a bug with the `pd.read_csv` patch, opening the file in "rb" mode was missing and causing some datasets to not work in streaming mode, and compression inference was missing ### A few details regarding `fsspec` in multiprocessing From https://github.com/fsspec/filesystem_spec/pull/963#issuecomment-1131709948 : > Non-async instances might be safe in the forked child, if they hold no open files/sockets etc.; I'm not sure any implementations pass this test! > If any async instance has been created, the newly forked processes must: > 1. discard references to locks, threads and event loops and make new ones > 2. not use any async fsspec instances from the parent process > 3. clear all class instance caches Therefore in a DataLoader's worker, I clear the reference to the loop and thread (1). We should be fine for 2 and 3 already since we don't use fsspec class instances from the parent process. Fix https://github.com/huggingface/datasets/issues/3950 Fix https://github.com/huggingface/datasets/issues/3951 TODO: - [x] fix tests
true
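Illustrative only (not part of PR 4375 above): a minimal sketch of the usage that the PR enables, assuming a recent `datasets` release where a streaming `IterableDataset` can be formatted for PyTorch and handed to a `DataLoader` with several workers; the dataset name is just an example.
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# Stream the dataset instead of downloading it entirely.
dataset = load_dataset("oscar", "unshuffled_deduplicated_fr", split="train", streaming=True)

# Format for PyTorch so the DataLoader accepts the iterable dataset.
dataset = dataset.with_format("torch")

# With the fix, each of the 4 workers iterates over its own shard of the stream,
# so examples are not duplicated across workers.
dataloader = DataLoader(dataset, batch_size=32, num_workers=4)

for batch in dataloader:
    break  # one batch is enough for the illustration
```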
1,241,860,535
https://api.github.com/repos/huggingface/datasets/issues/4374
https://github.com/huggingface/datasets/issues/4374
4,374
extremely slow processing when using a custom dataset
closed
2
2022-05-19T14:18:05
2023-07-25T15:07:17
2023-07-25T15:07:16
StephennFernandes
[ "bug", "question" ]
## processing a custom dataset loaded as .txt file is extremely slow, compared to a dataset of similar volume from the hub I have a large .txt file of 22 GB which i load into HF dataset `lang_dataset = datasets.load_dataset("text", data_files="hi.txt")` further i use a pre-processing function to clean the dataset `lang_dataset["train"] = lang_dataset["train"].map( remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)` the following processing takes astronomical time to process, while hoging all the ram. similar dataset of same size that's available in the huggingface hub works completely fine. which runs the same processing function and has the same amount of data. `lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)` the hours predicted to preprocess are as follows: huggingface hub dataset: 6.5 hrs custom loaded dataset: 7000 hrs note: both the datasets are almost actually same, just provided by different sources with has +/- some samples, only one is hosted on the HF hub and the other is downloaded in a text format. ## Steps to reproduce the bug ``` import datasets import psutil import sys import glob from fastcore.utils import listify import re import gc def remove_non_indic_sentences(example): tmp_ls = [] eng_regex = r'[. a-zA-Z0-9ÖÄÅöäå _.,!"\'\/$]*' for e in listify(example['text']): matches = re.findall(eng_regex, e) for match in (str(match).strip() for match in matches if match not in [""," ", " ", ",", " ,", ", ", " , "]): if len(list(match.split(" "))) > 2: e = re.sub(match," ",e,count=1) tmp_ls.append(e) gc.collect() example['clean_text'] = tmp_ls return example lang_dataset = datasets.load_dataset("text", data_files="hi.txt") lang_dataset["train"] = lang_dataset["train"].map( remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64) ## same thing work much faster when loading similar dataset from hub lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", split="train", use_auth_token=True) lang_dataset["train"] = lang_dataset["train"].map( remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64) ``` ## Actual results similar dataset of same size that's available in the huggingface hub works completely fine. which runs the same processing function and has the same amount of data. `lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True) **the hours predicted to preprocess are as follows:** huggingface hub dataset: 6.5 hrs custom loaded dataset: 7000 hrs **i even tried the following:** - sharding the large 22gb text files into smaller files and loading - saving the file to disk and then loading - using lesser num_proc - using smaller batch size - processing without batches ie : without `batched=True` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2.dev0 - Platform: Ubuntu 20.04 LTS - Python version: 3.9.7 - PyArrow version:8.0.0
false
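A side note on the snippet in issue 4374 above: the `.map()` calls close the parenthesis one argument too early, so `batch_size=64` ends up outside the call. A minimal corrected sketch (with a stand-in cleaning function, since the real one is defined in the issue) could look like this:
```python
import datasets

def remove_non_indic_sentences(examples):
    # Stand-in for the cleaning function defined in the issue.
    return {"clean_text": examples["text"]}

lang_dataset = datasets.load_dataset("text", data_files="hi.txt")

# All keyword arguments, including batch_size, belong inside the single .map() call.
lang_dataset["train"] = lang_dataset["train"].map(
    remove_non_indic_sentences,
    num_proc=12,
    batched=True,
    batch_size=64,
    remove_columns=lang_dataset["train"].column_names,
)
```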
1,241,769,310
https://api.github.com/repos/huggingface/datasets/issues/4373
https://github.com/huggingface/datasets/pull/4373
4,373
Remove links in docs to old dataset viewer
closed
1
2022-05-19T13:24:39
2022-05-20T15:24:28
2022-05-20T15:16:05
mariosasko
[]
Remove the links in the docs to the no longer maintained dataset viewer.
true
1,241,703,826
https://api.github.com/repos/huggingface/datasets/issues/4372
https://github.com/huggingface/datasets/pull/4372
4,372
Check if dataset features match before push in `DatasetDict.push_to_hub`
closed
1
2022-05-19T12:32:30
2022-05-20T15:23:36
2022-05-20T15:15:30
mariosasko
[]
Fix #4211
true
1,241,500,906
https://api.github.com/repos/huggingface/datasets/issues/4371
https://github.com/huggingface/datasets/pull/4371
4,371
Add missing language tags for udhr dataset
closed
1
2022-05-19T09:34:10
2022-06-08T12:03:24
2022-05-20T09:43:10
albertvillanova
[]
Related to #4362.
true
1,240,245,642
https://api.github.com/repos/huggingface/datasets/issues/4369
https://github.com/huggingface/datasets/pull/4369
4,369
Add redirect to dataset script in the repo structure page
closed
1
2022-05-18T17:05:33
2022-05-19T08:19:01
2022-05-19T08:10:51
lhoestq
[]
Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page.
true
1,240,064,860
https://api.github.com/repos/huggingface/datasets/issues/4368
https://github.com/huggingface/datasets/pull/4368
4,368
Add long answer candidates to natural questions dataset
closed
18
2022-05-18T14:35:42
2022-07-26T20:30:41
2022-07-26T20:18:42
seirasto
[]
This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https://github.com/google-research-datasets/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does not disturb the rest of the format. @lhoestq @albertvillanova
true
1,240,011,602
https://api.github.com/repos/huggingface/datasets/issues/4367
https://github.com/huggingface/datasets/pull/4367
4,367
Remove config names as yaml keys
closed
3
2022-05-18T13:59:24
2022-05-20T09:35:26
2022-05-20T09:27:19
lhoestq
[]
Many datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys. To fix this, I removed the tag separation per config name completely and now have a single flat YAML for all configurations. Dataset search doesn't use this info anyway. I removed all the config names used as YAML keys and moved them under a new `config:` key. This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946). Also, removing the dots in the YAML keys would allow us to do as in https://github.com/huggingface/datasets/pull/4302, which removes a hack that replaces all the dots by underscores in the YAML tags. I also added a test in the CI that checks all the YAML tags to make sure that: - they can be parsed using a YAML parser - they contain only valid YAML tags like languages or task_ids
true
1,239,534,165
https://api.github.com/repos/huggingface/datasets/issues/4366
https://github.com/huggingface/datasets/issues/4366
4,366
TypeError: __init__() missing 1 required positional argument: 'scheme'
closed
1
2022-05-18T07:17:29
2022-05-18T16:36:22
2022-05-18T16:36:21
jffgitt
[ "duplicate" ]
"name" : "node-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "", "version" : { "number" : "7.5.0", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "", "build_date" : "2019-11-26T01:06:52.518245Z", "build_snapshot" : false, "lucene_version" : "8.3.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" when I run the order: nohup python3 custom_service.pyc > service.log 2>&1& the log: nohup: 忽略输入 Traceback (most recent call last): File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module> File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize File "custom_impl.py", line 286, in custom_setup File "custom_impl.py", line 127, in create_es_index File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__ ssl_show_warn=ssl_show_warn, File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs node_configs = hosts_to_node_configs(hosts) File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs node_configs.append(host_mapping_to_node_config(host)) File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config return NodeConfig(**options) # type: ignore TypeError: __init__() missing 1 required positional argument: 'scheme' [1]+ 退出 1 nohup python3 custom_service.pyc > service.log 2>&1 custom_service_pyc can't running
false
1,239,109,943
https://api.github.com/repos/huggingface/datasets/issues/4365
https://github.com/huggingface/datasets/pull/4365
4,365
Remove dots in config names
closed
2
2022-05-17T20:12:57
2023-09-24T10:02:53
2022-05-18T13:59:41
lhoestq
[]
20+ datasets have dots in their config names. However it causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys. This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946). Also removing the dots in the config names would allow us to merge https://github.com/huggingface/datasets/pull/4302 which removes a hack that replaces all the dots by underscores in the YAML tags. I also added a test in the CI that checks that all the YAML tags to make sure that: - they can be parsed using a YAML parser - they contain only valid YAML tags like `languages` or `task_ids` - they contain valid config names (no invalid characters `<>:/\|?*.`)
true
1,238,976,106
https://api.github.com/repos/huggingface/datasets/issues/4364
https://github.com/huggingface/datasets/pull/4364
4,364
Support complex feature types as `features` in packaged loaders
closed
1
2022-05-17T17:53:23
2022-05-31T12:26:23
2022-05-31T12:16:32
mariosasko
[]
This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to support the string to int conversion in `table_cast` and ensure that integer labels are in a valid range. Fix https://github.com/huggingface/datasets/issues/4210 This PR is also a solution for these (popular) discussions: https://discuss.huggingface.co/t/converting-string-label-to-int/2816 and https://discuss.huggingface.co/t/class-labels-for-custom-datasets/15130/2 TODO: * [x] tests
true
1,238,897,652
https://api.github.com/repos/huggingface/datasets/issues/4363
https://github.com/huggingface/datasets/issues/4363
4,363
The dataset preview is not available for this split.
closed
7
2022-05-17T16:34:43
2022-06-08T12:32:10
2022-06-08T09:26:56
roholazandie
[ "dataset-viewer" ]
I have uploaded the corpus developed by our lab in the speech domain to huggingface [datasets](https://huggingface.co/datasets/Roh/ryanspeech). You can read about the companion paper accepted in interspeech 2021 [here](https://arxiv.org/abs/2106.08468). The dataset works fine but I can't make the dataset preview work. It gives me the following error that I don't understand. Can you help me to begin debugging it? ``` Status code: 400 Exception: AttributeError Message: 'NoneType' object has no attribute 'split' ```
false
1,238,680,112
https://api.github.com/repos/huggingface/datasets/issues/4362
https://github.com/huggingface/datasets/pull/4362
4,362
Update dataset_infos for UDHN/udhr dataset
closed
5
2022-05-17T13:52:59
2022-06-08T19:20:11
2022-06-08T19:11:21
leondz
[]
Checksum update to `udhr` for issue #4361
true
1,238,671,931
https://api.github.com/repos/huggingface/datasets/issues/4361
https://github.com/huggingface/datasets/issues/4361
4,361
`udhr` doesn't load, dataset checksum mismatch
closed
0
2022-05-17T13:47:09
2022-06-08T19:11:21
2022-06-08T19:11:21
leondz
[ "bug" ]
## Describe the bug Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed: size + checksum in datasets repo: ``` (hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json { "https://unicode.org/udhr/assemblies/udhr_xml.zip": { "num_bytes": 2273633, "checksum": "0565fa62c2ff155b84123198bcc967edd8c5eb9679eadc01e6fb44a5cf730fee" }, "https://unicode.org/udhr/assemblies/udhr_txt.zip": { "num_bytes": 2107471, "checksum": "087b474a070dd4096ae3028f9ee0b30dcdcb030cc85a1ca02e143be46327e5e5" } } ``` size + checksum regenerated from current source files: ``` (hfdev) leon@blade:~/datasets/datasets/udhr$ rm dataset_infos.json (hfdev) leon@blade:~/datasets/datasets/udhr$ datasets-cli test --save_infos udhr.py Using custom data configuration default Testing builder 'default' (1/1) Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66... Dataset udhn downloaded and prepared to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66. Subsequent calls will reuse this data. 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 686.69it/s] Dataset Infos file saved at dataset_infos.json Test successful. (hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json { "https://unicode.org/udhr/assemblies/udhr_xml.zip": { "num_bytes": 2389690, "checksum": "a3350912790196c6e1b26bfd1c8a50e8575f5cf185922ecd9bd15713d7d21438" }, "https://unicode.org/udhr/assemblies/udhr_txt.zip": { "num_bytes": 2215441, "checksum": "cb87ecb25b56f34e4fd6f22b323000524fd9c06ae2a29f122b048789cf17e9fe" } } (hfdev) leon@blade:~/datasets/datasets/udhr$ ``` --- is unicode.org a sustainable hosting solution for this dataset? ## Steps to reproduce the bug ```python from datasets import load_dataset udhr = load_dataset("udhr") ``` ## Expected results That a Dataset object containing the UDHR data will be returned. ## Actual results ``` >>> d = load_dataset('udhr') Using custom data configuration default Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66... 
Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/leon/.local/lib/python3.9/site-packages/datasets/load.py", line 1731, in load_dataset builder_instance.download_and_prepare( File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 613, in download_and_prepare self._download_and_prepare( File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 1117, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 684, in _download_and_prepare verify_checksums( File "/home/leon/.local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://unicode.org/udhr/assemblies/udhr_xml.zip', 'https://unicode.org/udhr/assemblies/udhr_txt.zip'] >>> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.1 commit/4110fb6034f79c5fb470cf1043ff52180e9c63b7 - Platform: Linux Ubuntu 20.04 - Python version: 3.9.12 - PyArrow version: 8.0.0
false
1,237,239,096
https://api.github.com/repos/huggingface/datasets/issues/4360
https://github.com/huggingface/datasets/pull/4360
4,360
Fix example in opus_ubuntu, Add license info
closed
2
2022-05-16T14:22:28
2022-06-01T13:06:07
2022-06-01T12:57:09
leondz
[]
This PR * fixes a typo in the example for the `opus_ubuntu` dataset, where it's mistakenly referred to as `ubuntu` * adds the declared license info for this corpus' origin * adds an example instance * updates the data origin type
true
1,237,149,578
https://api.github.com/repos/huggingface/datasets/issues/4359
https://github.com/huggingface/datasets/pull/4359
4,359
Fix Version equality
closed
1
2022-05-16T13:19:26
2022-05-24T16:25:37
2022-05-24T16:17:14
albertvillanova
[]
I think `Version` equality should align with other similar cases in Python, like: ```python In [1]: "a" == 5, "a" == None Out[1]: (False, False) In [2]: "a" != 5, "a" != None Out[2]: (True, True) ``` With this PR, we will get: ```python In [3]: Version("1.0.0") == 5, Version("1.0.0") == None Out[3]: (False, False) In [4]: Version("1.0.0") != 5, Version("1.0.0") != None Out[4]: (True, True) ``` Note I found this issue when `doc-builder` tried to compare: ```python if param.default != inspect._empty ``` where `param.default` is an instance of `Version`.
true
1,237,147,692
https://api.github.com/repos/huggingface/datasets/issues/4358
https://github.com/huggingface/datasets/issues/4358
4,358
Missing dataset tags and sections in some dataset cards
open
2
2022-05-16T13:18:16
2022-05-30T15:36:52
null
sashavor
[ "bug" ]
Summary of CircleCI errors for different dataset metadata: - **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **Conllpp**: expected some content in section `Citation Information` but it is empty. - **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets': ['unknown'] are not registered tags - **CoNLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids' - **Hate_speech18**: expected some content in section `Data Instances` but it is empty, expected some content in section `Data Splits` but it is empty - **Jigsaw_toxicity_pred**: expected some content in section `Citation Information` but it is empty. - **LIAR**: `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty. - **MSRA NER**: `Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty. - **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **sms_spam**: `Data Instances` and `Data Splits` are empty. - **Quora**: expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
false
1,237,037,069
https://api.github.com/repos/huggingface/datasets/issues/4357
https://github.com/huggingface/datasets/pull/4357
4,357
Fix warning in push_to_hub
closed
1
2022-05-16T11:50:17
2022-05-16T15:18:49
2022-05-16T15:10:41
albertvillanova
[]
Fix warning: ``` FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0. ```
true
1,236,846,308
https://api.github.com/repos/huggingface/datasets/issues/4356
https://github.com/huggingface/datasets/pull/4356
4,356
Fix dataset builder default version
closed
2
2022-05-16T09:05:10
2022-05-30T13:56:58
2022-05-30T13:47:54
albertvillanova
[]
Currently, when using a custom config (subclass of `BuilderConfig`), default version set at the builder level is ignored: we must set default version in the custom config class. However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the custom config is ignored and "0.0.0" is used instead: ```python ds = load_dataset("wikipedia", language="co", date="20220501", beam_runner="DirectRunner") ``` generates the following config: ```python WikipediaConfig(name='20220501.co', version=0.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for co, parsed from 20220501 dump.') ``` with version "0.0.0" instead of "2.0.0". See as a counter-example, when the config is present in `BUILDER_CONFIGS`: ```python ds = load_dataset("wikipedia", "20220301.fr", beam_runner="DirectRunner") ``` generates the following config: ```python WikipediaConfig(name='20220301.fr', version=2.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for fr, parsed from 20220301 dump.') ``` with correct version "2.0.0", as set in the custom config class. The reason for this is that `DatasetBuilder` has a default VERSION ("0.0.0") that overwrites the default version set at the custom config class. This PR: - Removes the default VERSION at `DatasetBuilder` (set to None, so that the class attribute exists but it does not override the custom config default version). - Note that the `BuilderConfig` class already sets a default version = "0.0.0"; no need to pass this from the builder.
true
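For context on PR 4356 above, a minimal sketch of the pattern the fix preserves: the default version is declared on the custom config class rather than on the builder. This is illustrative only and not the actual Wikipedia script; the class and attribute names are made up.
```python
import datasets


class MyDatasetConfig(datasets.BuilderConfig):
    def __init__(self, language=None, **kwargs):
        # The default version is declared on the custom config class.
        super().__init__(version=datasets.Version("2.0.0"), **kwargs)
        self.language = language


class MyDataset(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = MyDatasetConfig
    # No builder-level VERSION: with this fix, configs built from config_kwargs
    # keep the config's "2.0.0" default instead of falling back to "0.0.0".
    # (_info, _split_generators and _generate_examples omitted for brevity)
```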
1,236,797,490
https://api.github.com/repos/huggingface/datasets/issues/4355
https://github.com/huggingface/datasets/pull/4355
4,355
Fix warning in upload_file
closed
1
2022-05-16T08:21:31
2022-05-16T11:28:02
2022-05-16T11:19:57
albertvillanova
[]
Fix warning: ``` FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error ```
true
1,236,404,383
https://api.github.com/repos/huggingface/datasets/issues/4354
https://github.com/huggingface/datasets/issues/4354
4,354
Problems with WMT dataset
closed
6
2022-05-15T20:58:26
2022-07-11T14:54:02
2022-07-11T14:54:01
eldarkurtic
[ "bug", "dataset bug" ]
## Describe the bug I am trying to load WMT15 dataset and to define which data-sources to use for train/validation/test splits, but unfortunately it seems that the official documentation at [https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)) doesn't work anymore. ## Steps to reproduce the bug ```shell >>> import datasets >>> a = datasets.translate.wmt.WmtConfig() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'datasets' has no attribute 'translate' >>> a = datasets.wmt.WmtConfig() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'datasets' has no attribute 'wmt' ``` ## Expected results To load WMT15 with given data-sources. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.10.0-10-amd64-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
false
1,236,092,176
https://api.github.com/repos/huggingface/datasets/issues/4353
https://github.com/huggingface/datasets/pull/4353
4,353
Don't strip proceeding hyphen
closed
1
2022-05-14T18:25:29
2022-05-16T18:51:38
2022-05-16T13:52:11
JohnGiorgi
[]
Closes #4320.
true
1,236,086,170
https://api.github.com/repos/huggingface/datasets/issues/4352
https://github.com/huggingface/datasets/issues/4352
4,352
When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way
open
1
2022-05-14T17:55:15
2022-05-16T15:09:17
null
plamb-viso
[ "bug" ]
## Describe the bug Recently I was trying to using `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types i had defined for them did not match the types that came back. Because of this, i ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https://github.com/huggingface/datasets/issues/4349). In short i ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and the loop that loops over cols that I figured out what was going on. It seems like `.map()` could set a boolean that it's checked that for at least 1 instance from the dataset, the returned data's types match the types provided by the `features` param and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. This could be construed as a feature but it feels more like a bug to me. ## Steps to reproduce the bug I don't have explicit code to repro the bug, but ill show an example Code prior to the fix: ```python def preprocess(examples): # returns an encoded data dict with keys that match the features, but the types do not match ... def get_encoded_data(data): dataset = Dataset.from_pandas(data) unique_labels = data['audit_type'].unique().tolist() features = Features({ 'image': Array3D(dtype="uint8", shape=(3, 224, 224))), 'input_ids': Sequence(feature=Value(dtype='int64'))), 'attention_mask': Sequence(Value(dtype='int64'))), 'token_type_ids': Sequence(Value(dtype='int64'))), 'bbox': Array2D(dtype="int64", shape=(512, 4))), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names) ``` The Features set that fixed it: ```python features = Features({ 'image': Sequence(Array3D(dtype="uint8", shape=(3, 224, 224))), 'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))), 'attention_mask': Sequence(Sequence(Value(dtype='int64'))), 'token_type_ids': Sequence(Sequence(Value(dtype='int64'))), 'bbox': Sequence(Array2D(dtype="int64", shape=(512, 4))), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) ``` The difference between my original code (which was based on documentation) and the working code is the addition of the `Sequence(...)` to 4/5 features as I am working with paginated data and the doc examples are not. ## Expected results Dataset.map() attempts to validate the data types for each Feature on the first iteration and errors out if they are not validated. ## Actual results Specify the actual results or traceback. Based on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though its error messages dont make this obvious Example errors: ``` OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB. (offset overflow while concatenating arrays) ``` ``` zsh: killed python doc_classification.py UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> datasets version: 2.1.0 Platform: macOS-12.2.1-arm64-arm-64bit Python version: 3.9.12 PyArrow version: 6.0.1 Pandas version: 1.4.2
false
1,235,950,209
https://api.github.com/repos/huggingface/datasets/issues/4351
https://github.com/huggingface/datasets/issues/4351
4,351
Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems
closed
1
2022-05-14T11:30:42
2022-12-14T18:22:59
2022-12-14T18:22:59
Rexhaif
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** When working with large datasets stored on remote filesystems (such as S3), the process of uploading a dataset can take a really long time. For instance, I was uploading a re-processed version of wmt17 en-ru to my S3 bucket and it took about 35 minutes (and that's given that I have a fiber optic connection). The only output during that process was a progress bar for flattening indices and then ~35 minutes of complete silence. **Describe the solution you'd like** I want to be able to enable a progress bar when calling .save_to_disk(..) and .load_from_disk(..); it would track either the amount of bytes sent/received or the number of records written/loaded, and give some ETA. Basically just tqdm. **Describe alternatives you've considered** - Save the dataset to a tmp folder on disk and then upload it using a custom wrapper over botocore that works with a progress bar, like [this](https://alexwlchan.net/2021/04/s3-progress-bars/).
false
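For the alternative mentioned in issue 4351 above (save locally, then upload with a progress bar), a rough sketch using boto3's upload callback with tqdm is shown here; the bucket, prefix, and local paths are placeholders.
```python
import os
import boto3
from tqdm import tqdm

def upload_dir_with_progress(local_dir, bucket, prefix):
    s3 = boto3.client("s3")
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.join(prefix, os.path.relpath(path, local_dir))
            with tqdm(total=os.path.getsize(path), unit="B", unit_scale=True, desc=key) as bar:
                # boto3 invokes Callback with the number of bytes transferred per
                # chunk, which is exactly what tqdm's update() expects.
                s3.upload_file(path, bucket, key, Callback=bar.update)

# dataset.save_to_disk("wmt17_processed")  # save locally first
# upload_dir_with_progress("wmt17_processed", "my-bucket", "datasets/wmt17_processed")
```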
1,235,505,104
https://api.github.com/repos/huggingface/datasets/issues/4350
https://github.com/huggingface/datasets/pull/4350
4,350
Add a new metric: CTC_Consistency
closed
1
2022-05-13T17:31:19
2022-05-19T10:23:04
2022-05-19T10:23:03
YEdenZ
[]
Add the CTC_Consistency metric. Do I also need to modify the `test_metric_common.py` file to make it run in the tests?
true
1,235,474,765
https://api.github.com/repos/huggingface/datasets/issues/4349
https://github.com/huggingface/datasets/issues/4349
4,349
Dataset.map()'s fails at any value of parameter writer_batch_size
closed
6
2022-05-13T16:55:12
2022-06-02T12:51:11
2022-05-14T15:08:08
plamb-viso
[ "bug" ]
## Describe the bug If the the value of `writer_batch_size` is less than the total number of instances in the dataset it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance. Context: I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co/docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor), the default is passing a document to the Processor and allowing it to create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model which allows you to use your own OCR results (in my case, Amazon Textract) so you have to provide the image, words and bounding boxes yourself. I am using this second option which might be good context for the bug. I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages. Code I am using is provided below ## Steps to reproduce the bug I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large pdf documents. ```python def get_encoded_data(data): dataset = Dataset.from_pandas(data) unique_labels = data['label'].unique() features = Features({ 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'token_type_ids': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1) encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME) encoded_dataset.set_format(type="torch") return encoded_dataset ``` ```python PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False) def preprocess_data(examples): directory = os.path.join(FILES_PATH, examples['file_location']) images_dir = os.path.join(directory, PDF_IMAGE_DIR) textract_response_path = os.path.join(directory, 'textract.json') doc_meta_path = os.path.join(directory, 'doc_meta.json') textract_document = get_textract_document(textract_response_path, doc_meta_path) images, words, bboxes = get_doc_training_data(images_dir, textract_document) encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True) # https://github.com/NielsRogge/Transformers-Tutorials/issues/36 encoded_inputs["image"] = np.array(encoded_inputs["image"]) encoded_inputs["label"] = examples['label_id'] return encoded_inputs ``` ## Expected results My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly. ## Actual results If writer_batch_size is set to a value less than the number of rows, I get either: ``` OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB. 
(offset overflow while concatenating arrays) ``` or simply ``` zsh: killed python doc_classification.py UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown ``` If it is greater than the number of rows, i get the `zsh: killed` error above ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
false
1,235,432,976
https://api.github.com/repos/huggingface/datasets/issues/4348
https://github.com/huggingface/datasets/issues/4348
4,348
`inspect` functions can't fetch dataset script from the Hub
closed
2
2022-05-13T16:08:26
2022-06-09T10:26:06
2022-06-09T10:26:06
stevhliu
[ "bug" ]
The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`: ```py >>> from datasets import inspect_dataset >>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder') FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory. ```
false
1,235,318,064
https://api.github.com/repos/huggingface/datasets/issues/4347
https://github.com/huggingface/datasets/pull/4347
4,347
Support remote cache_dir
closed
6
2022-05-13T14:26:35
2022-05-25T16:35:23
2022-05-25T16:27:03
albertvillanova
[]
This PR implements complete support for remote `cache_dir`. Before, the support was just partial. This is useful to create datasets using Apache Beam (parallel data processing) builder with `cache_dir` in a remote bucket, e.g., for Wikipedia dataset.
true
1,235,067,062
https://api.github.com/repos/huggingface/datasets/issues/4346
https://github.com/huggingface/datasets/issues/4346
4,346
GH Action to build documentation never ends
closed
0
2022-05-13T10:44:44
2022-05-13T11:22:00
2022-05-13T11:22:00
albertvillanova
[ "bug" ]
## Describe the bug See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true I finally forced the cancel of the workflow.
false
1,235,062,787
https://api.github.com/repos/huggingface/datasets/issues/4345
https://github.com/huggingface/datasets/pull/4345
4,345
Fix never ending GH Action to build documentation
closed
1
2022-05-13T10:40:10
2022-05-13T11:29:43
2022-05-13T11:22:00
albertvillanova
[]
There was an unclosed code block introduced by: - #4313 https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538 This causes the "Make documentation" step in the "Build documentation" workflow to never finish. - I think this issue should also be addressed in the `doc-builder` lib. Fix #4346.
true
1,234,882,542
https://api.github.com/repos/huggingface/datasets/issues/4344
https://github.com/huggingface/datasets/pull/4344
4,344
Fix docstring in DatasetDict::shuffle
closed
0
2022-05-13T08:06:00
2022-05-25T09:23:43
2022-05-24T15:35:21
felixdivo
[]
I think due to #1626, the docstring contained this error ever since `seed` was added.
true
1,234,864,168
https://api.github.com/repos/huggingface/datasets/issues/4343
https://github.com/huggingface/datasets/issues/4343
4,343
Metrics documentation is not accessible in the datasets doc UI
closed
1
2022-05-13T07:46:30
2022-06-03T08:50:25
2022-06-03T08:50:25
fxmarty
[ "enhancement", "Metric discussion" ]
**Is your feature request related to a problem? Please describe.** Search for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the metric expects as an input, for example for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function doc but not in the `README.md`, and one needs to go look into the code to understand what the metric expects. **Describe the solution you'd like** Have the documentation for metrics appear as well in the doc UI, e.g. this https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63 I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
false
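To illustrate the point in issue 4343 above about the `id` key expected by the `squad` metric, a small usage sketch based on the metric's docstring (the values are made up):
```python
from datasets import load_metric

squad_metric = load_metric("squad")

# Each prediction and reference must carry a matching "id" field.
predictions = [{"id": "56e10a3be3433e1400422b22", "prediction_text": "1976"}]
references = [{
    "id": "56e10a3be3433e1400422b22",
    "answers": {"text": ["1976"], "answer_start": [97]},
}]

results = squad_metric.compute(predictions=predictions, references=references)
print(results)  # {'exact_match': 100.0, 'f1': 100.0}
```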
1,234,743,765
https://api.github.com/repos/huggingface/datasets/issues/4342
https://github.com/huggingface/datasets/pull/4342
4,342
Fix failing CI on Windows for sari and wiki_split metrics
closed
0
2022-05-13T05:03:38
2022-05-13T05:47:42
2022-05-13T05:47:42
albertvillanova
[]
This PR adds `sacremoses` as an explicit tests dependency (required by the sari and wiki_split metrics). Before, this library was installed as a third-party dependency, but this is no longer the case for Windows. Fix #4341.
true
1,234,739,703
https://api.github.com/repos/huggingface/datasets/issues/4341
https://github.com/huggingface/datasets/issues/4341
4,341
Failing CI on Windows for sari and wiki_split metrics
closed
0
2022-05-13T04:55:17
2022-05-13T05:47:41
2022-05-13T05:47:41
albertvillanova
[ "bug" ]
## Describe the bug Our CI has been failing since yesterday on Windows for the metrics sari and wiki_split: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split ``` See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594
false
1,234,671,025
https://api.github.com/repos/huggingface/datasets/issues/4340
https://github.com/huggingface/datasets/pull/4340
4,340
Fix irc_disentangle dataset script
closed
1
2022-05-13T02:37:57
2022-05-24T15:37:30
2022-05-24T15:37:29
i-am-pad
[]
Updated the extracted dataset repo's latest commit hash (included in the tarball's name), and updated the related dataset_infos.json.
true
1,234,496,289
https://api.github.com/repos/huggingface/datasets/issues/4339
https://github.com/huggingface/datasets/pull/4339
4,339
Dataset loader for the MSLR2022 shared task
closed
9
2022-05-12T21:23:41
2022-07-18T17:19:27
2022-07-18T16:58:34
JohnGiorgi
[]
This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader: ```python from datasets import load_dataset ms2 = load_dataset("mslr2022", "ms2") cochrane = load_dataset("mslr2022", "cochrane") ``` Usage looks like: ```python >>> ms2 = load_dataset("mslr2022", "ms2", split="validation") >>> ms2.keys() dict_keys(['review_id', 'pmid', 'title', 'abstract', 'target', 'background', 'reviews_info']) >>> ms2[0].target 'Conclusions SC therapy is effective for PAH in pre clinical studies .\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .' ``` I have tested this works with the following command: ```bash datasets-cli test datasets/mslr2022 --save_infos --all_configs ``` However I have having a little trouble generating the dummy data ```bash datasets-cli dummy_data datasets/mslr2022 --auto_generate ``` errors out with the following stack trace: ``` Couldn't generate dummy file 'datasets/mslr2022/dummy/ms2/1.0.0/dummy_data/mslr_data.tar.gz/mslr_data/ms2/convert_to_cochrane.py'. Ignore that if this file is not useful for dummy data. Traceback (most recent call last): File "/Users/johngiorgi/.pyenv/versions/datasets/bin/datasets-cli", line 11, in <module> load_entry_point('datasets', 'console_scripts', 'datasets-cli')() File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 319, in run keep_uncompressed=self._keep_uncompressed, File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data dataset_builder._prepare_split(split_generator, check_duplicate_keys=False) File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/builder.py", line 1146, in _prepare_split desc=f"Generating {split_info.name} split", File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/Users/johngiorgi/.cache/huggingface/modules/datasets_modules/datasets/mslr2022/b4becd2f52cf18255d4934d7154c2a1127fb393371b87b3c1fc2c8b35a777cea/mslr2022.py", line 149, in _generate_examples reviews_info_df = pd.read_csv(reviews_info_filepath, index_col=0) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper return func(*args, **kwargs) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv return _read(filepath_or_buffer, kwds) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 488, in _read return parser.read(nrows) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1047, in read index, columns, col_dict = self._engine.read(nrows) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 224, in read chunks = self._reader.read_low_memory(nrows) File "pandas/_libs/parsers.pyx", line 801, in pandas._libs.parsers.TextReader.read_low_memory File "pandas/_libs/parsers.pyx", line 857, in pandas._libs.parsers.TextReader._read_rows File 
"pandas/_libs/parsers.pyx", line 843, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 1925, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 2 ``` I think this may have to do with unusual line terminators in the original data. When I open it in VSCode, it complains: ``` The file 'dev-inputs.csv' contains one or more unusual line terminator characters, like Line Separator (LS) or Paragraph Separator (PS). It is recommended to remove them from the file. This can be configured via `editor.unusualLineTerminators`. ``` Tagging the organizers of the shared task in case they want to sanity check this or add any info to the model card :) @lucylw @jayded
true
1,234,478,851
https://api.github.com/repos/huggingface/datasets/issues/4338
https://github.com/huggingface/datasets/pull/4338
4,338
Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full
closed
2
2022-05-12T21:02:08
2022-05-16T15:51:02
2022-05-16T15:42:59
sashavor
[]
Adding evaluation metadata for: - Tweet Eval - Tweets Hate Speech Detection - VCTK - Weibo NER - Wisesight Sentiment - XSum - Yahoo Answers Topics - Yelp Polarity - Yelp Review Full
true
1,234,470,083
https://api.github.com/repos/huggingface/datasets/issues/4337
https://github.com/huggingface/datasets/pull/4337
4,337
Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR
closed
2
2022-05-12T20:52:02
2022-05-16T16:26:19
2022-05-16T16:18:30
sashavor
[]
Adding evaluation metadata for: - Reddit - Rotten Tomatoes - SemEval 2010 - Sentiment 140 - SMS Spam - Snips - SQuAD - SQuAD v2 - Timit ASR
true
1,234,446,174
https://api.github.com/repos/huggingface/datasets/issues/4336
https://github.com/huggingface/datasets/pull/4336
4,336
Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment
closed
3
2022-05-12T20:24:45
2022-05-16T16:25:00
2022-05-16T16:24:59
sashavor
[]
Adding evaluation metadata for: - Health Fact - Jigsaw Toxicity - LIAR - LJ Speech - MSRA NER - Multi News - NCBI Disease - Poem Sentiment
true
1,234,157,123
https://api.github.com/repos/huggingface/datasets/issues/4335
https://github.com/huggingface/datasets/pull/4335
4,335
Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech
closed
3
2022-05-12T15:28:16
2022-05-16T16:31:10
2022-05-16T16:23:09
sashavor
[]
Adding evaluation metadata for: - BillSum - CoNLL2003 - CoNLLPP - CUAD - Emotion - GigaWord - GLUE - Hate Speech 18 - Hate Speech Offensive
true
1,234,103,477
https://api.github.com/repos/huggingface/datasets/issues/4334
https://github.com/huggingface/datasets/pull/4334
4,334
Adding eval metadata for billsum
closed
0
2022-05-12T14:49:08
2023-09-24T10:02:46
2022-05-12T14:49:24
sashavor
[]
Adding eval metadata for billsum
true
1,234,038,705
https://api.github.com/repos/huggingface/datasets/issues/4333
https://github.com/huggingface/datasets/pull/4333
4,333
Adding eval metadata for Banking 77
closed
1
2022-05-12T14:05:05
2022-05-12T21:03:32
2022-05-12T21:03:31
sashavor
[]
Adding eval metadata for Banking 77
true
1,234,021,188
https://api.github.com/repos/huggingface/datasets/issues/4332
https://github.com/huggingface/datasets/pull/4332
4,332
Adding eval metadata for arabic speech corpus
closed
0
2022-05-12T13:51:38
2022-05-12T21:03:21
2022-05-12T21:03:20
sashavor
[]
Adding eval metadata for arabic speech corpus
true
1,234,016,110
https://api.github.com/repos/huggingface/datasets/issues/4331
https://github.com/huggingface/datasets/pull/4331
4,331
Adding eval metadata to Amazon Polarity
closed
0
2022-05-12T13:47:59
2022-05-12T21:03:14
2022-05-12T21:03:13
sashavor
[]
Adding eval metadata to Amazon Polarity
true
1,233,992,681
https://api.github.com/repos/huggingface/datasets/issues/4330
https://github.com/huggingface/datasets/pull/4330
4,330
Adding eval metadata to Allociné dataset
closed
0
2022-05-12T13:31:39
2022-05-12T21:03:05
2022-05-12T21:03:05
sashavor
[]
Adding eval metadata to Allociné dataset
true
1,233,991,207
https://api.github.com/repos/huggingface/datasets/issues/4329
https://github.com/huggingface/datasets/pull/4329
4,329
Adding eval metadata for AG News
closed
0
2022-05-12T13:30:32
2022-05-12T21:02:41
2022-05-12T21:02:40
sashavor
[]
Adding eval metadata for AG News
true
1,233,856,690
https://api.github.com/repos/huggingface/datasets/issues/4328
https://github.com/huggingface/datasets/pull/4328
4,328
Fix and clean Apache Beam functionality
closed
1
2022-05-12T11:41:07
2022-05-24T13:43:11
2022-05-24T13:34:32
albertvillanova
[]
null
true
1,233,840,020
https://api.github.com/repos/huggingface/datasets/issues/4327
https://github.com/huggingface/datasets/issues/4327
4,327
`wikipedia` pre-processed datasets
closed
2
2022-05-12T11:25:42
2022-08-31T08:26:57
2022-08-31T08:26:57
vpj
[ "bug" ]
## Describe the bug The [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset readme says that certain subsets are preprocessed. However, it seems like they are not available. When I try to load them, it takes a really long time and it seems like it's processing them. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", "20220301.en") ``` ## Expected results To load the dataset. ## Actual results Takes a very long time to load (after downloading). After `Downloading data files: 100%`, it takes hours and gets killed. Tried `wikipedia.simple` and it got processed after ~30 mins.
false
1,233,818,489
https://api.github.com/repos/huggingface/datasets/issues/4326
https://github.com/huggingface/datasets/pull/4326
4,326
Fix type hint and documentation for `new_fingerprint`
closed
1
2022-05-12T11:05:08
2022-06-01T13:04:45
2022-06-01T12:56:18
fxmarty
[]
Currently, there are no type hints nor `Optional` for the argument `new_fingerprint` in several methods of `datasets.arrow_dataset.Dataset`. There was some documentation missing as well. Note that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator. The modifications in this PR are fine since here https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/src/datasets/fingerprint.py#L446-L454 for the non-inplace case we make sure to auto-generate a new fingerprint (as indicated in the doc).
true
1,233,812,191
https://api.github.com/repos/huggingface/datasets/issues/4325
https://github.com/huggingface/datasets/issues/4325
4,325
Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance
closed
4
2022-05-12T10:59:08
2022-05-13T10:57:15
2022-05-13T10:57:02
leondz
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train ### Description The viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in viewer. Maybe it needs a bit more time. * https://huggingface.co/datasets/strombergnlp/polstance/viewer/PolStance/train * https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train While offenseval_2020 is gated w. prompt, the other gated previews I have run fine in Viewer, e.g. https://huggingface.co/datasets/strombergnlp/shaj , so I'm a bit stumped! ### Owner Yes
false
1,233,780,870
https://api.github.com/repos/huggingface/datasets/issues/4324
https://github.com/huggingface/datasets/issues/4324
4,324
Support >1 PWC dataset per dataset card
open
1
2022-05-12T10:29:07
2022-05-13T11:25:29
null
leondz
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Some datasets cover more than one dataset on PapersWithCode. For example, the OffensEval 2020 challenge involved five languages, and there's one dataset to cover all five datasets, [`strombergnlp/offenseval_2020`](https://huggingface.co/datasets/strombergnlp/offenseval_2020). However, the yaml `paperswithcode_id:` dataset card entry only supports one value; when multiple are added, the PWC link disappears from the dataset page. Because the link from a PapersWithCode dataset to a Hugging Face Hub entry can't be entered manually and seems to be scraped, this means end users don't have a way of getting a dataset reader link to appear on all the PWC datasets supported by one HF Hub Dataset reader. It's not super unusual to have papers introduce multiple parallel variants of a dataset and would be handy to reflect this, so e.g. dataset maintainers can DRY, and so dataset users can keep what they're doing simple. **Describe the solution you'd like** I'd like `paperswithcode_id:` to support lists and be able to connect with multiple PWC datasets. **Describe alternatives you've considered** De-normalising the datasets on HF Hub to create multiple readers for each variation on a task, i.e. instead of a single `offenseval_2020`, having `offenseval_2020_ar`, `offenseval_2020_da`, `offenseval_2020_gr`, ... **Additional context** Hope that's enough **Priority** Low
false
1,233,634,928
https://api.github.com/repos/huggingface/datasets/issues/4323
https://github.com/huggingface/datasets/issues/4323
4,323
Audio can not find value["bytes"]
closed
9
2022-05-12T08:31:58
2022-07-07T13:16:08
2022-07-07T13:16:08
YooSungHyun
[ "bug" ]
## Describe the bug I wrote down _generate_examples like: ![image](https://user-images.githubusercontent.com/34292279/168027186-2fe8b255-2cd8-4b9b-ab1e-8d5a7182979b.png) but where is the bytes? ![image](https://user-images.githubusercontent.com/34292279/168027330-f2496dd0-1d99-464c-b15c-bc57eee0415a.png) ## Expected results value["bytes"] is not None, so i can make datasets with bytes, not path ## bytes looks like: blah blah~~ \xfe\x03\x00\xfb\x06\x1c\x0bo\x074\x03\xaf\x01\x13\x04\xbc\x06\x8c\x05y\x05,\t7\x08\xaf\x03\xc0\xfe\xe8\xfc\x94\xfe\xb7\xfd\xea\xfa\xd5\xf9$\xf9>\xf9\x1f\xf8\r\xf5F\xf49\xf4\xda\xf5-\xf8\n\xf8k\xf8\x07\xfb\x18\xfd\xd9\xfdv\xfd"\xfe\xcc\x01\x1c\x04\x08\x04@\x04{\x06^\tf\t\x1e\x07\x8b\x06\x02\x08\x13\t\x07\x08 \x06g\x06"\x06\xa0\x03\xc6\x002\xff \xff\x1d\xff\x19\xfd?\xfb\xdb\xfa\xfc\xfa$\xfb}\xf9\xe5\xf7\xf9\xf7\xce\xf8.\xf9b\xf9\xc5\xf9\xc0\xfb\xfa\xfcP\xfc\xba\xfbQ\xfc1\xfe\x9f\xff\x12\x00\xa2\x00\x18\x02Z\x03\x02\x04\xb1\x03\xc5\x03W\x04\x82\x04\x8f\x04U\x04\xb6\x04\x10\x05{\x04\x83\x02\x17\x01\x1d\x00\xa0\xff\xec\xfe\x03\xfe#\xfe\xc2\xfe2\xff\xe6\xfe\x9a\xfe~\x01\x91\x08\xb3\tU\x05\x10\x024\x02\xe4\x05\xa8\x07\xa7\x053\x07I\n\x91\x07v\x02\x95\xfd\xbb\xfd\x96\xff\x01\xfe\x1e\xfb\xbb\xf9S\xf8!\xf8\xf4\xf5\xd6\xf3\xf7\xf3l\xf4d\xf6l\xf7d\xf6b\xf7\xc1\xfa(\xfd\xcf\xfd*\xfdq\xfe\xe9\x01\xa8\x03t\x03\x17\x04B\x07\xce\t\t\t\xeb\x06\x0c\x07\x95\x08\x92\t\xbc\x07O\x06\xfb\x06\xd2\x06U\x04\x00\x02\x92\x00\xdc\x00\x84\x00 \xfeT\xfc\xf1\xfb\x82\xfc\x97\xfb}\xf9\x00\xf8_\xf8\x0b\xf9\xe5\xf8\xe2\xf7\xaa\xf8\xb2\xfa\x10\xfbl\xfa\xf5\xf9Y\xfb\xc0\xfd\xe8\xfe\xec\xfe1\x00\xad\x01\xec\x02E\x03\x13\x03\x9b\x03o\x04\xce\x04\xa8\x04\xb2\x04\x1b\x05\xc0\x05\xd2\x04\xe8\x02z\x01\xbe\x00\xae\x00\x07\x00$\xff|\xff\x8e\x00\x13\x00\x10\xff\x98\xff0\x05{\x0b\x05\t\xaa\x03\x82\x01n\x03 blah blah~~ that function not return None ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:2.2.1 - Platform:ubuntu 18.04 - Python version:3.6.9 - PyArrow version:6.0.1
false
1,233,596,947
https://api.github.com/repos/huggingface/datasets/issues/4322
https://github.com/huggingface/datasets/pull/4322
4,322
Added stratify option to train_test_split function.
closed
9
2022-05-12T08:00:31
2022-11-22T14:53:55
2022-05-25T20:43:51
nandwalritik
[]
This PR adds a `stratify` option to the `train_test_split` method. I took scikit-learn's `StratifiedShuffleSplit` class as a reference for implementing the stratified split and integrated the changes as suggested by @lhoestq. It fixes #3452. @lhoestq Please review and let me know if any changes are required.
true
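Illustrative usage of the stratified split added in PR 4322 above. Note that the keyword shown here, `stratify_by_column`, is the name exposed in released `datasets` versions; the PR description simply calls it a `stratify` option, so treat the exact argument name as an assumption.
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Requires the target column to be a ClassLabel feature; the split keeps the
# label distribution roughly equal across the train and test parts.
splits = ds.train_test_split(test_size=0.2, stratify_by_column="label", seed=42)

print(splits["train"].features["label"])
print(splits["test"][:3]["label"])
```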
1,233,273,351
https://api.github.com/repos/huggingface/datasets/issues/4321
https://github.com/huggingface/datasets/pull/4321
4,321
Adding dataset enwik8
closed
2
2022-05-11T23:25:02
2022-06-01T14:27:30
2022-06-01T14:04:06
HallerPatrick
[]
Because I regularly work with enwik8, I would like to contribute the dataset loader 🤗
true
1,233,208,864
https://api.github.com/repos/huggingface/datasets/issues/4320
https://github.com/huggingface/datasets/issues/4320
4,320
Multi-news dataset loader attempts to strip wrong character from beginning of summaries
closed
2
2022-05-11T21:36:41
2022-05-16T13:52:10
2022-05-16T13:52:10
JohnGiorgi
[ "bug" ]
## Describe the bug The `multi_news.py` data loader has [a line which attempts to strip `"- "` from the beginning of summaries](https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/datasets/multi_news/multi_news.py#L97). The actual character in the multi-news dataset, however, is `"– "`, which is different, e.g. `"– " != "- "`. I would have just opened a PR to fix the mistake, but I am wondering what the motivation for stripping this character is? AFAICT most approaches just leave it in, e.g. the current SOTA on this dataset, [PRIMERA](https://huggingface.co/allenai/PRIMERA-multinews) (you can see it in the generated summaries of the model in their [example notebook](https://github.com/allenai/PRIMER/blob/main/Evaluation_Example.ipynb)). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
false
1,232,982,023
https://api.github.com/repos/huggingface/datasets/issues/4319
https://github.com/huggingface/datasets/pull/4319
4,319
Adding eval metadata for ade v2
closed
1
2022-05-11T17:36:20
2022-05-12T13:29:51
2022-05-12T13:22:19
sashavor
[]
Adding metadata to allow evaluation
true
1,232,905,488
https://api.github.com/repos/huggingface/datasets/issues/4318
https://github.com/huggingface/datasets/pull/4318
4,318
Don't check f.loc in _get_extraction_protocol_with_magic_number
closed
1
2022-05-11T16:27:09
2022-05-11T16:57:02
2022-05-11T16:46:31
lhoestq
[]
`f.loc` doesn't always exist for file-like objects in Python. I removed it since it was not necessary anyway (we always seek the file back to 0 after reading the magic number). Fix https://github.com/huggingface/datasets/issues/4310
true
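A small sketch of the pattern described in PR 4318 above: read a few magic bytes to guess the compression protocol, then seek back to 0 so downstream readers are unaffected. The magic-number table is a reduced, assumed subset, and the file name is a placeholder.
```python
MAGIC_NUMBERS = {
    b"\x1f\x8b": "gzip",
    b"PK\x03\x04": "zip",
    b"\xfd7zXZ\x00": "xz",
}

def guess_compression(f):
    """Peek at the first bytes of a file-like object without relying on f.loc."""
    magic = f.read(8)
    f.seek(0)  # always rewind, whatever protocol was detected
    for prefix, protocol in MAGIC_NUMBERS.items():
        if magic.startswith(prefix):
            return protocol
    return None

with open("archive.tar.gz", "rb") as f:
    print(guess_compression(f))  # "gzip"
```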
1,232,737,401
https://api.github.com/repos/huggingface/datasets/issues/4317
https://github.com/huggingface/datasets/pull/4317
4,317
Fix cnn_dailymail (dm stories were ignored)
closed
1
2022-05-11T14:25:25
2022-05-11T16:00:09
2022-05-11T15:52:37
lhoestq
[]
https://github.com/huggingface/datasets/pull/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset. I fixed that and removed the Google Drive link (it has annoying quota limitation issues). We can do a patch release after this is merged.
true