| column | dtype | stats |
|---|---|---|
| id | int64 | 599M – 3.29B |
| url | string | lengths 58 – 61 |
| html_url | string | lengths 46 – 51 |
| number | int64 | 1 – 7.72k |
| title | string | lengths 1 – 290 |
| state | string | 2 values |
| comments | int64 | 0 – 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-08-01 05:15:45 |
| user_login | string | lengths 3 – 26 |
| labels | list | lengths 0 – 4 |
| body | string | lengths 0 – 228k |
| is_pull_request | bool | 2 classes |
869,017,977
https://api.github.com/repos/huggingface/datasets/issues/2272
https://github.com/huggingface/datasets/issues/2272
2,272
Bug in Dataset.class_encode_column
closed
1
2021-04-27T16:13:18
2021-04-30T12:54:27
2021-04-30T12:54:27
albertvillanova
[ "bug" ]
## Describe the bug All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded. ## Expected results All the original columns should be kept. This needs regression tests.
false
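A plain-Python sketch of the behavior the issue expects (a hypothetical stand-in, not the actual `datasets` implementation): encoding one column to class ids must leave every other column intact.

```python
def class_encode_column(rows, column):
    """Encode string labels in `column` as integer class ids, keeping all other columns."""
    classes = sorted({row[column] for row in rows})
    class_to_id = {label: i for i, label in enumerate(classes)}
    return [{**row, column: class_to_id[row[column]]} for row in rows]

rows = [
    {"text": "good movie", "label": "pos"},
    {"text": "bad movie", "label": "neg"},
]
encoded = class_encode_column(rows, "label")
print(encoded)  # [{'text': 'good movie', 'label': 1}, {'text': 'bad movie', 'label': 0}]
```

A regression test would assert exactly this: the column set before and after the call is unchanged.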
869,002,141
https://api.github.com/repos/huggingface/datasets/issues/2271
https://github.com/huggingface/datasets/issues/2271
2,271
Synchronize table metadata with features
closed
1
2021-04-27T15:55:13
2022-06-01T17:13:21
2022-06-01T17:13:21
albertvillanova
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767): > Metadata stored in the schema is just redundant information regarding the feature types. It is used when calling Dataset.from_file to know which feature types to use. These metadata are stored in the schema of the pyarrow table by using `update_metadata_with_features`. However, this is something that's almost never tested properly. **Describe the solution you'd like** We should find a way to always make sure that the metadata (in `self.data.schema.metadata`) are synced with the actual feature types (in `self.info.features`).
false
868,913,660
https://api.github.com/repos/huggingface/datasets/issues/2270
https://github.com/huggingface/datasets/pull/2270
2,270
Fix iterable interface expected by numpy
closed
1
2021-04-27T14:35:56
2021-04-28T17:39:27
2021-04-28T17:39:27
albertvillanova
[]
Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`.
true
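The PR description refers to Python's legacy sequence iteration protocol: an object that defines only `__getitem__` is still iterable, because iteration falls back to calling `__getitem__(0)`, `__getitem__(1)`, … until `IndexError`. A minimal illustration of that interface:

```python
class LegacySequence:
    """Old-style iterable: defines __getitem__ only, no __iter__."""
    def __init__(self, data):
        self._data = data

    def __getitem__(self, index):
        # Raises IndexError past the end, which is what terminates iteration.
        return self._data[index]

seq = LegacySequence([10, 20, 30])
# iter() falls back to __getitem__(0), __getitem__(1), ... until IndexError.
print(list(seq))  # [10, 20, 30]
```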
868,878,468
https://api.github.com/repos/huggingface/datasets/issues/2269
https://github.com/huggingface/datasets/pull/2269
2,269
Fix query table with iterable
closed
0
2021-04-27T13:59:38
2021-04-27T14:21:57
2021-04-27T14:21:56
lhoestq
[]
The benchmark runs are failing on master because they try to use an iterable to query the dataset. However, there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable. This PR fixes it.
true
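The distinction the PR hinges on can be seen directly in NumPy (assuming NumPy is installed): `np.array` does not consume an arbitrary iterator element-wise (it wraps it in a 0-d object array), while `np.fromiter` does.

```python
import numpy as np

it = iter([1, 2, 3])
wrapped = np.array(it)  # does NOT consume the iterator element-wise
print(wrapped.shape, wrapped.dtype)  # () object

consumed = np.fromiter(iter([1, 2, 3]), dtype=np.int64)  # consumes it properly
print(consumed)  # [1 2 3]
```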
868,773,380
https://api.github.com/repos/huggingface/datasets/issues/2268
https://github.com/huggingface/datasets/pull/2268
2,268
Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers
closed
3
2021-04-27T11:58:28
2021-06-12T12:44:49
2021-04-27T13:43:20
lhoestq
[]
This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0. Setting `pyarrow<4.0.0` for now. I'll open a JIRA issue once I know more about the origin of the problem.
true
868,291,129
https://api.github.com/repos/huggingface/datasets/issues/2267
https://github.com/huggingface/datasets/issues/2267
2,267
DatasetDict save/load: failing test in 1.6 but not in 1.5
open
6
2021-04-27T00:03:25
2021-05-28T15:27:34
null
timothyjlaurent
[ "bug" ]
## Describe the bug We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema. Downgrading to `<1.6` fixes the problem. ## Steps to reproduce the bug ```python ### Load a dataset dict from jsonl path = '/test/foo' ds_dict.save_to_disk(path) ds_from_disk = DatasetDict.load_from_disk(path) ## <-- this is where I see the error on 1.6 ``` ## Expected results Upgrading to 1.6 shouldn't break that test. We should be able to serialize to and from disk. ## Actual results ``` # Infer features if None inferred_features = Features.from_arrow_schema(arrow_table.schema) if self.info.features is None: self.info.features = inferred_features # Infer fingerprint if None if self._fingerprint is None: self._fingerprint = generate_fingerprint(self) # Sanity checks assert self.features is not None, "Features can't be None in a Dataset object" assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object" if self.info.features.type != inferred_features.type: > raise ValueError( "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format( self.info.features, self.info.features.type, inferred_features, inferred_features.type ) ) E ValueError: External features info don't match the dataset: E Got E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', 
id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'child': Value(dtype='int64', id=None), 'child_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'color': Value(dtype='string', id=None), 'head': Value(dtype='int64', id=None), 'head_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'label': Value(dtype='string', id=None)}], 'spans': [{'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'disabled': Value(dtype='bool', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'ws': Value(dtype='bool', id=None)}]} E with type E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<child: int64, child_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, color: string, head: int64, head_span: struct<end: int64, label: string, start: int64, 
token_end: int64, token_start: int64>, label: string>>, spans: list<item: struct<end: int64, label: string, start: int64, text: string, token_end: int64, token_start: int64, type: string>>, text: string, tokens: list<item: struct<disabled: bool, end: int64, id: int64, start: int64, text: string, ws: bool>>> E E but expected something like E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'head': Value(dtype='int64', id=None), 'child': Value(dtype='int64', id=None), 'head_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'child_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'color': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'spans': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'end': 
Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'ws': Value(dtype='bool', id=None), 'disabled': Value(dtype='bool', id=None)}]} E with type E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<head: int64, child: int64, head_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, child_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, color: string, label: string>>, spans: list<item: struct<text: string, start: int64, token_start: int64, token_end: int64, end: int64, type: string, label: string>>, text: string, tokens: list<item: struct<text: string, start: int64, end: int64, id: int64, ws: bool, disabled: bool>>> ../../../../../.virtualenvs/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:274: ValueError ``` ## Versions - Datasets: 1.6.1 - Python: 3.8.5 (default, Jan 26 2021, 10:01:04) [Clang 12.0.0 (clang-1200.0.32.2)] - Platform: macOS-10.15.7-x86_64-i386-64bit ```
false
867,864,353
https://api.github.com/repos/huggingface/datasets/issues/2266
https://github.com/huggingface/datasets/pull/2266
2,266
Make tests run faster
closed
3
2021-04-26T15:55:40
2021-04-29T10:00:13
2021-04-29T10:00:04
lhoestq
[]
From 7min to 2min to run pytest. Ideally we should keep the whole CI run time below 10min. In this PR I removed the remote tests that were never used. I also replaced nested parametrized tests with unit tests. This makes me think that we could still add more high-level tests to check for a few combinations of parameters (but not all of them, since there are too many). Let me know what you think. Finally, in another PR we can also separate the CI into two circleci jobs: - the tests of the core code of the lib - the tests of all the dataset/metric scripts.
true
867,490,646
https://api.github.com/repos/huggingface/datasets/issues/2265
https://github.com/huggingface/datasets/pull/2265
2,265
Update black
closed
0
2021-04-26T09:35:09
2021-04-26T09:47:48
2021-04-26T09:47:47
lhoestq
[]
The latest black version, 21.4b0, requires reformatting most dataset scripts and also the core code of the lib. This currently makes the CI fail on master.
true
867,476,228
https://api.github.com/repos/huggingface/datasets/issues/2264
https://github.com/huggingface/datasets/pull/2264
2,264
Fix memory issue in multiprocessing: Don't pickle table index
closed
5
2021-04-26T09:21:35
2021-04-26T10:30:28
2021-04-26T10:08:14
lhoestq
[]
The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset into memory. I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table. Fixes issue #2256. We'll do a patch release ASAP!
true
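The fix described above (drop the index when pickling, rebuild it after unpickling) follows a standard Python pattern via `__getstate__`/`__setstate__`. A hedged sketch with a toy `IndexedTable`, not the actual `datasets` code:

```python
import pickle

class IndexedTable:
    """Toy table whose index is expensive to pickle, so it is dropped and rebuilt."""
    def __init__(self, rows):
        self.rows = rows
        self._index = self._build_index()

    def _build_index(self):
        return {row["id"]: pos for pos, row in enumerate(self.rows)}

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("_index", None)  # don't serialize the index
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._index = self._build_index()  # rebuild on unpickle, e.g. in a worker process

table = IndexedTable([{"id": "a"}, {"id": "b"}])
clone = pickle.loads(pickle.dumps(table))
print(clone._index)  # {'a': 0, 'b': 1}
```

The pickle payload stays small because only `rows` crosses the process boundary; each worker pays the (cheaper) cost of rebuilding the index locally.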
867,420,912
https://api.github.com/repos/huggingface/datasets/issues/2263
https://github.com/huggingface/datasets/pull/2263
2,263
test data added, dataset_infos updated
closed
0
2021-04-26T08:27:18
2021-04-29T09:30:21
2021-04-29T09:30:20
bhavitvyamalik
[]
Fixes #2262. Thanks for pointing out the issue with the dataset, @jinmang2!
true
867,325,351
https://api.github.com/repos/huggingface/datasets/issues/2262
https://github.com/huggingface/datasets/issues/2262
2,262
NewsPH NLI dataset script fails to access test data.
closed
1
2021-04-26T06:44:41
2021-04-29T09:32:03
2021-04-29T09:30:20
jinmang2
[ "dataset bug" ]
In Newsph-NLI Dataset (#1192), it fails to access test data. According to the script below, the download manager will download the train data when trying to download the test data. https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71 If you download it according to the script above, you can see that train and test receive the same data as shown below. ```python >>> from datasets import load_dataset >>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py") >>> newsph_nli DatasetDict({ train: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) test: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) validation: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 90000 }) }) >>> newsph_nli["train"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} >>> newsph_nli["test"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} ``` In local, I modified the code of the source as below and got the correct result. 
```python 71 test_path = os.path.join(download_path, "test.csv") ``` ```python >>> from datasets import load_dataset >>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py") >>> newsph_nli DatasetDict({ train: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) test: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 9000 }) validation: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 90000 }) }) >>> newsph_nli["train"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} >>> newsph_nli["test"][0] {'hypothesis': '-- JAI (@JaiPaller) September 13, 2019', 'label': 1, 'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'} ``` I don't have experience with open source pull requests, so I suggest that you reflect them in the source. Thank you for reading :)
false
867,088,818
https://api.github.com/repos/huggingface/datasets/issues/2261
https://github.com/huggingface/datasets/pull/2261
2,261
Improve ReadInstruction logic and update docs
closed
1
2021-04-25T19:07:26
2021-05-17T18:24:44
2021-05-17T16:48:57
mariosasko
[]
Improve ReadInstruction logic and docs.
true
866,961,697
https://api.github.com/repos/huggingface/datasets/issues/2260
https://github.com/huggingface/datasets/pull/2260
2,260
GooAQ dataset added
closed
1
2021-04-25T09:26:48
2021-05-07T08:36:17
2021-05-07T08:36:17
bhavitvyamalik
[]
@lhoestq here the dataset is stored with Git LFS. Should I add an option for manually downloading the dataset using `git lfs pull` after cloning the repo, or can we accommodate this in the current `download_and_extract`?
true
866,880,092
https://api.github.com/repos/huggingface/datasets/issues/2259
https://github.com/huggingface/datasets/pull/2259
2,259
Add support for Split.ALL
closed
1
2021-04-25T01:45:42
2021-06-28T08:21:27
2021-06-28T08:21:27
mariosasko
[]
The title says it all.
true
866,870,588
https://api.github.com/repos/huggingface/datasets/issues/2258
https://github.com/huggingface/datasets/pull/2258
2,258
Fix incorrect update_metadata_with_features calls in ArrowDataset
closed
1
2021-04-25T00:48:38
2021-04-26T17:16:30
2021-04-26T16:54:04
mariosasko
[]
Fixes bugs in the `update_metadata_with_features` calls (caused by changes in #2151).
true
866,755,203
https://api.github.com/repos/huggingface/datasets/issues/2257
https://github.com/huggingface/datasets/pull/2257
2,257
added metrics for CUAD
closed
3
2021-04-24T14:09:54
2021-04-29T09:53:38
2021-04-27T16:16:32
bhavitvyamalik
[]
For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here
true
866,708,609
https://api.github.com/repos/huggingface/datasets/issues/2256
https://github.com/huggingface/datasets/issues/2256
2,256
Running `Dataset.map` with `num_proc > 1` uses a lot of memory
closed
2
2021-04-24T09:56:20
2021-04-26T17:12:15
2021-04-26T17:12:15
roskoN
[ "bug" ]
## Describe the bug Running `Dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow. ## Steps to reproduce the bug ```python from datasets import load_dataset dstc8_dataset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False) def _prepare_sample(batch): return {"input_ids": list(), "attention_mask": list()} for split_name, dataset_split in list(dstc8_dataset.items()): print(f"Processing {split_name}") encoded_dataset_split = dataset_split.map( function=_prepare_sample, batched=True, num_proc=4, remove_columns=dataset_split.column_names, batch_size=10, writer_batch_size=10, keep_in_memory=False, ) print(encoded_dataset_split) path = f"./data/encoded_{split_name}" encoded_dataset_split.save_to_disk(path) ``` ## Expected results Memory usage should stay within reasonable boundaries. ## Actual results This is the htop output from running the provided script. ![image](https://user-images.githubusercontent.com/8143425/115954836-66954980-a4f3-11eb-8340-0153bdc3a475.png) ## Versions ``` - Datasets: 1.6.0 - Python: 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] - Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.10 ``` Running on WSL2
false
866,242,892
https://api.github.com/repos/huggingface/datasets/issues/2255
https://github.com/huggingface/datasets/pull/2255
2,255
Task casting for text classification & question answering
closed
15
2021-04-23T16:00:41
2021-05-18T13:31:36
2021-05-18T13:31:35
SBrandeis
[]
This PR implements task preparation for a given task, in continuation of #2143. Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines Edit by @lewtun: This PR implements support for the following tasks: * `text-classification` * `question-answering` The intended usage is as follows: ```python # Load a dataset with default column names / features ds = load_dataset("dataset_name") # Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo` ds = ds.prepare_for_task(task="text-classification") # Casting can also be realised during load ds = load_dataset("dataset_name", task="text-classification") # We can also combine shared tasks across dataset concatenation ds1 = load_dataset("dataset_name_1", task="text-classification") ds2 = load_dataset("dataset_name_2", task="text-classification") # If the tasks have the same schema, so will `ds_concat` ds_concat = concatenate_datasets([ds1, ds2]) ``` Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function. As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g. ```python squad = load_dataset("./datasets/squad", split="train") qa = QuestionAnswering() schema = Features({**qa.input_schema, **qa.label_schema}) assert all(item in squad.features.items() for item in schema.items()) ```
true
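The subset check quoted in the PR description can be factored into a small helper. A sketch with plain dicts standing in for `Features` objects (names and feature strings are illustrative, not the library's API):

```python
def is_task_compatible(dataset_features, task_schema):
    """True if every (column, type) pair required by the task exists in the dataset."""
    return all(item in dataset_features.items() for item in task_schema.items())

squad_like = {"question": "string", "context": "string", "answers": "struct"}
qa_schema = {"question": "string", "context": "string"}

print(is_task_compatible(squad_like, qa_schema))          # True
print(is_task_compatible({"text": "string"}, qa_schema))  # False
```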
866,169,312
https://api.github.com/repos/huggingface/datasets/issues/2254
https://github.com/huggingface/datasets/pull/2254
2,254
Update format, fingerprint and indices after add_item
closed
1
2021-04-23T14:31:49
2021-04-27T16:30:49
2021-04-27T16:30:48
lhoestq
[]
Added fingerprint and format update wrappers + update the indices by adding the index of the newly added item in the table.
true
866,034,321
https://api.github.com/repos/huggingface/datasets/issues/2253
https://github.com/huggingface/datasets/pull/2253
2,253
Perform minor refactoring: use config
closed
4
2021-04-23T11:45:47
2021-05-27T09:12:45
2021-04-27T15:02:59
albertvillanova
[ "refactoring" ]
Perform minor refactoring related to `config`.
true
865,870,710
https://api.github.com/repos/huggingface/datasets/issues/2252
https://github.com/huggingface/datasets/issues/2252
2,252
Slow dataloading with big datasets issue persists
closed
70
2021-04-23T08:18:20
2024-01-26T15:10:28
2024-01-26T15:10:28
hwijeen
[]
Hi, I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122). However, the problem seems to persist. Here is the profiled results: 1) Running with 60GB ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 517.96 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ model_backward | 0.26144 |100 | 26.144 | 5.0475 | model_forward | 0.11123 |100 | 11.123 | 2.1474 | get_train_batch | 0.097121 |100 | 9.7121 | 1.8751 | ``` 3) Running with 600GB, datasets==1.6.0 ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 4563.2 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ get_train_batch | 5.1279 |100 | 512.79 | 11.237 | model_backward | 4.8394 |100 | 483.94 | 10.605 | model_forward | 0.12162 |100 | 12.162 | 0.26652 | ``` I see that `get_train_batch` lags when data is large. Could this be related to different issues? I would be happy to provide necessary information to investigate.
false
865,848,705
https://api.github.com/repos/huggingface/datasets/issues/2251
https://github.com/huggingface/datasets/issues/2251
2,251
while running run_qa.py, ran into a value error
open
0
2021-04-23T07:51:03
2021-04-23T07:51:03
null
nlee0212
[]
command: python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/ error: ValueError: External features info don't match the dataset: Got {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answer': {'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None)}, 'url': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None)} with type struct<answer: struct<text: string, answer_start: int32, html_answer_start: int32>, context: string, id: string, question: string, raw_html: string, title: string, url: string> but expected something like {'answer': {'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None)}, 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)} with type struct<answer: struct<answer_start: int32, html_answer_start: int32, text: string>, context: string, id: string, question: string, raw_html: string, title: string, url: string> I didn't encounter this error 4 hours ago. any solutions for this kind of issue? looks like gained dataset format refers to 'Data Fields', while expected refers to 'Data Instances'.
false
865,402,449
https://api.github.com/repos/huggingface/datasets/issues/2250
https://github.com/huggingface/datasets/issues/2250
2,250
some issue in loading local txt file as Dataset for run_mlm.py
closed
2
2021-04-22T19:39:13
2022-03-30T08:29:47
2022-03-30T08:29:47
alighofrani95
[]
![image](https://user-images.githubusercontent.com/14968123/115773877-18cef300-a3c6-11eb-8e58-a9cbfd1001ec.png) First of all, I tried to load 3 .txt files as a dataset (the directory and permissions are definitely OK), and I get the error below. > FileNotFoundError: [Errno 2] No such file or directory: 'c' Removing one of the training .txt files fixes it, and it also works if I use all the files for training. ![image](https://user-images.githubusercontent.com/14968123/115774207-867b1f00-a3c6-11eb-953b-905cfb112d25.png) ![image](https://user-images.githubusercontent.com/14968123/115774264-9b57b280-a3c6-11eb-9f36-7b109f0e5a31.png) After this, my question is how I could use this defined Dataset with run_mlm.py for pretraining from scratch. Using --train_file path_to_train_file can only take a single .txt, .csv, or .json file. I tried to set my defined Dataset as --dataset_name, but the issue below occurs. > Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 336, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/dataset/dataset.py > During handling of the above exception, another exception occurred: > Traceback (most recent call last): File "run_mlm.py", line 486, in <module> main() File "run_mlm.py", line 242, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 719, in load_dataset use_auth_token=use_auth_token, File 
"/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 347, in prepare_module combined_path, github_file_path FileNotFoundError: Couldn't find file locally at dataset/dataset.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/dataset/dataset.py. The file is also not present on the master branch on github.
false
865,257,826
https://api.github.com/repos/huggingface/datasets/issues/2249
https://github.com/huggingface/datasets/pull/2249
2,249
Allow downloading/processing/caching only specific splits
open
2
2021-04-22T17:51:44
2022-07-06T15:19:48
null
albertvillanova
[ "enhancement" ]
Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits. This PR implements two steps to handle only specific splits: - it allows processing/caching only specific splits into Arrow files - for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`) This PR makes several assumptions: - `DownloadConfig` contains the configuration settings for downloading - the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading
true
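The first step described above (processing/caching only specific splits) amounts to filtering the builder's split generators before they are materialized. A plain-Python sketch under stated assumptions: `SplitGenerator` here is a hypothetical stand-in for the library's class, and `keep_requested_splits` is an illustrative helper, not the PR's actual code.

```python
from collections import namedtuple

# Hypothetical stand-in for datasets' SplitGenerator, for illustration only.
SplitGenerator = namedtuple("SplitGenerator", ["name", "gen_kwargs"])

def keep_requested_splits(split_generators, requested):
    """Process/cache only the requested splits; None keeps everything."""
    if requested is None:
        return split_generators
    wanted = {requested} if isinstance(requested, str) else set(requested)
    return [g for g in split_generators if g.name in wanted]

gens = [SplitGenerator("train", {}), SplitGenerator("test", {}), SplitGenerator("validation", {})]
print([g.name for g in keep_requested_splits(gens, "test")])             # ['test']
print([g.name for g in keep_requested_splits(gens, ["train", "test"])])  # ['train', 'test']
```

The harder half of the PR (downloading only what a user-defined `_split_generators` needs) cannot be captured this simply, which is exactly the intricacy the description points out.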
864,853,447
https://api.github.com/repos/huggingface/datasets/issues/2248
https://github.com/huggingface/datasets/pull/2248
2,248
Implement Dataset to JSON
closed
0
2021-04-22T11:46:51
2021-04-27T15:29:21
2021-04-27T15:29:20
albertvillanova
[ "enhancement" ]
Implement `Dataset.to_json`.
true
864,817,520
https://api.github.com/repos/huggingface/datasets/issues/2247
https://github.com/huggingface/datasets/pull/2247
2,247
Implement Dataset from Parquet
closed
2
2021-04-22T11:01:38
2021-07-26T13:28:52
2021-07-26T13:28:51
albertvillanova
[ "enhancement" ]
Implement instantiation of Dataset from Parquet file.
true
864,220,031
https://api.github.com/repos/huggingface/datasets/issues/2246
https://github.com/huggingface/datasets/pull/2246
2,246
Faster map w/ input_columns & faster slicing w/ Iterable keys
closed
1
2021-04-21T19:49:07
2021-04-26T16:13:59
2021-04-26T16:13:59
norabelrose
[]
@lhoestq Fixes #2193 - `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set - Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is no longer used for this; we just call `pa.Table.from_batches` with a list of all the batch slices. Together these changes have sped up batched `map()` calls over subsets of columns quite considerably in my initial testing.
true
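The `np.searchsorted` trick the PR describes maps global row indices to the record batches that contain them in one vectorized call, using the cumulative batch lengths. A sketch of the idea (assuming NumPy; variable names are illustrative, not the `Table.fast_gather` internals):

```python
import numpy as np

batch_lengths = np.array([3, 4, 3])           # rows per record batch
offsets = np.cumsum(batch_lengths)            # [3, 7, 10]: exclusive end offset of each batch
starts = np.concatenate(([0], offsets[:-1]))  # [0, 3, 7]: start offset of each batch

indices = np.array([0, 5, 9])                 # global row indices to gather, all at once
batch_ids = np.searchsorted(offsets, indices, side="right")
rows_in_batch = indices - starts[batch_ids]

print(batch_ids)      # [0 1 2]
print(rows_in_batch)  # [0 2 2]
```

Each `(batch_id, row_in_batch)` pair can then be used to slice the right batch directly, avoiding a `pa.concat_tables` over the whole dataset.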
863,191,655
https://api.github.com/repos/huggingface/datasets/issues/2245
https://github.com/huggingface/datasets/pull/2245
2,245
Add `key` type and duplicates verification with hashing
closed
17
2021-04-20T20:03:19
2021-05-10T18:04:37
2021-05-10T17:31:22
NikhilBartwal
[]
Closes #2230 There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`. This PR is currently a work in progress with the following goals: - [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes - [x] Add a `key` attribute to `ArrowWriter.write()` for hashing - [x] Add a hashing class which takes an input key of a certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5` - [x] Creating a function giving a custom error message when non-unique keys are found **[This will take care of type-checking for keys]** - [x] Checking for duplicate keys in `writer.write()` for each batch [**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in the future for `ArrowBasedBuilder`] @lhoestq Thank you for the feedback. It would be great to have your guidance on this!
true
863,029,946
https://api.github.com/repos/huggingface/datasets/issues/2244
https://github.com/huggingface/datasets/pull/2244
2,244
Set specific cache directories per test function call
open
4
2021-04-20T17:06:22
2022-07-06T15:19:48
null
albertvillanova
[]
Implement specific cache directories (datasets, metrics and modules) per test function call. Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls. This PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.
true
862,909,389
https://api.github.com/repos/huggingface/datasets/issues/2243
https://github.com/huggingface/datasets/issues/2243
2,243
Map is slow and processes batches one after another
closed
5
2021-04-20T14:58:20
2021-05-03T17:54:33
2021-05-03T17:54:32
villmow
[ "bug" ]
## Describe the bug I have a somewhat unclear bug, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't give exact steps to reproduce, I'm sorry. I process a large dataset in a two-step process. I first call map on a dataset I load from disk and create a new dataset from it. This works as expected and `map` uses all workers I started it with. Then I process the dataset created by the first step, again with `map`, which is really slow, starting only one or two processes at a time. The number of processes is the same for both steps. pseudo code: ```python ds = datasets.load_from_disk("path") new_dataset = ds.map(work, batched=True, ...) # fast uses all processes final_dataset = new_dataset.map(work2, batched=True, ...) # slow starts one process after another ``` ## Expected results Second stage should be as fast as the first stage. ## Versions Paste the output of the following code: - Datasets: 1.5.0 - Python: 3.8.8 (default, Feb 24 2021, 21:46:12) - Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10 Do you guys have any idea? Thanks a lot!
false
862,870,205
https://api.github.com/repos/huggingface/datasets/issues/2242
https://github.com/huggingface/datasets/issues/2242
2,242
Link to datasets viewer on Quick Tour page returns "502 Bad Gateway"
closed
1
2021-04-20T14:19:51
2021-04-20T15:02:45
2021-04-20T15:02:45
martavillegas
[ "bug" ]
Link to datasets viewer (https://huggingface.co/datasets/viewer/) on the Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway". The same error occurs with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc
false
862,696,460
https://api.github.com/repos/huggingface/datasets/issues/2241
https://github.com/huggingface/datasets/pull/2241
2,241
Add SLR32 to OpenSLR
closed
1
2021-04-20T11:02:45
2021-04-23T16:21:24
2021-04-23T15:36:15
cahya-wirawan
[]
I would like to add SLR32 to OpenSLR. It contains four South African languages: Afrikaans, Sesotho, Setswana and isiXhosa
true
862,537,856
https://api.github.com/repos/huggingface/datasets/issues/2240
https://github.com/huggingface/datasets/pull/2240
2,240
Clarify how to load wikihow
closed
0
2021-04-20T08:02:58
2021-04-21T09:54:57
2021-04-21T09:54:57
albertvillanova
[]
Explain more clearly how to load the dataset in the manual download instructions. Related to #2239.
true
861,904,306
https://api.github.com/repos/huggingface/datasets/issues/2239
https://github.com/huggingface/datasets/issues/2239
2,239
Error loading wikihow dataset
closed
4
2021-04-19T21:02:31
2021-04-20T16:33:11
2021-04-20T16:33:11
odellus
[ "bug" ]
## Describe the bug When attempting to load wikihow into a dataset with ```python from datasets import load_dataset dataset = load_dataset('wikihow', data_dir='./wikihow') ``` I get the message: ``` AttributeError: 'BuilderConfig' object has no attribute 'filename' ``` at the end of a [full stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2). ## Steps to reproduce the bug I have followed the instructions for creating a wikihow dataset. The [wikihow dataset site](https://huggingface.co/datasets/wikihow) says to use ```python from datasets import load_dataset dataset = load_dataset('wikihow') ``` to load the dataset. I do so and I get the message ``` AssertionError: The dataset wikihow with config all requires manual data. Please follow the manual download instructions: You need to manually download two wikihow files. An overview of which files to download can be seen at https://github.com/mahnazkoupaee/WikiHow-Dataset. You need to download the following two files manually: 1) https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 and save the file under <path/to/folder>/wikihowAll.csv 2) https://ucsb.app.box.com/s/7yq601ijl1lzvlfu4rjdbbxforzd2oag and save the file under <path/to/folder>/wikihowSep.csv The <path/to/folder> can e.g. be "~/manual_wikihow_data". Wikihow can then be loaded using the following command `datasets.load_dataset("wikihow", data_dir="<path/to/folder>")`. . Manual data can be loaded with `datasets.load_dataset(wikihow, data_dir='<path/to/manual/data>') ``` So I create a directory `./wikihow` and download `wikihowAll.csv` and `wikihowSep.csv` into the new directory. Then I run ```python from datasets import load_dataset dataset = load_dataset('wikihow', data_dir='./wikihow') ``` that's when I get the [stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2) ## Expected results I expected it to load the downloaded files into a dataset. 
## Actual results ```python Using custom data configuration default-data_dir=.%2Fwikihow Downloading and preparing dataset wikihow/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/azureuser/.cache/huggingface/datasets/wikihow/default-data_dir=.%2Fwikihow/0.0.0/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2... --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-9-5e4d40142f30> in <module> ----> 1 dataset = load_dataset('wikihow',data_dir='./wikihow') ~/.local/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 745 try_from_hf_gcs=try_from_hf_gcs, 746 base_path=base_path,--> 747 use_auth_token=use_auth_token, 748 ) 749 ~/.local/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 577 if not downloaded_from_gcs: 578 self._download_and_prepare( --> 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 580 ) 581 # Sync info ~/.local/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 632 split_dict = SplitDict(dataset_name=self.name) 633 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 634 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 635 636 # Checksums verification ~/.cache/huggingface/modules/datasets_modules/datasets/wikihow/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2/wikihow.py in _split_generators(self, dl_manager) 132 133 
path_to_manual_file = os.path.join( --> 134 os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename 135 ) 136 AttributeError: 'BuilderConfig' object has no attribute 'filename' ``` ## Versions Paste the output of the following code: ```python import datasets import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` ``` - Datasets: 1.5.0 - Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] - Platform: Linux-5.4.0-1046-azure-x86_64-with-Ubuntu-18.04-bionic ```
false
861,518,291
https://api.github.com/repos/huggingface/datasets/issues/2238
https://github.com/huggingface/datasets/pull/2238
2,238
NLU evaluation data
closed
0
2021-04-19T16:47:20
2021-04-23T15:32:05
2021-04-23T15:32:05
dkajtoch
[]
New intent classification dataset from https://github.com/xliuhw/NLU-Evaluation-Data
true
861,427,439
https://api.github.com/repos/huggingface/datasets/issues/2237
https://github.com/huggingface/datasets/issues/2237
2,237
Update Dataset.dataset_size after transformed with map
open
1
2021-04-19T15:19:38
2021-04-20T14:22:05
null
albertvillanova
[ "enhancement" ]
After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated.
false
861,388,145
https://api.github.com/repos/huggingface/datasets/issues/2236
https://github.com/huggingface/datasets/issues/2236
2,236
Request to add StrategyQA dataset
open
0
2021-04-19T14:46:26
2021-04-19T14:46:26
null
sarahwie
[ "dataset request" ]
## Request to add StrategyQA dataset - **Name:** StrategyQA - **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa) - **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf) - **Data:** [here](https://allenai.org/data/strategyqa) - **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. Good for multi-hop reasoning modeling.
false
861,040,716
https://api.github.com/repos/huggingface/datasets/issues/2235
https://github.com/huggingface/datasets/pull/2235
2,235
Update README.md
closed
0
2021-04-19T08:21:02
2021-04-19T12:49:19
2021-04-19T12:49:19
PierreColombo
[]
Adding relevant citations (paper accepted at AAAI 2020 & EMNLP 2020) to the benchmark
true
860,442,246
https://api.github.com/repos/huggingface/datasets/issues/2234
https://github.com/huggingface/datasets/pull/2234
2,234
Fix bash snippet formatting in ADD_NEW_DATASET.md
closed
0
2021-04-17T16:01:08
2021-04-19T10:57:31
2021-04-19T07:51:36
mariosasko
[]
This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting.
true
860,097,084
https://api.github.com/repos/huggingface/datasets/issues/2233
https://github.com/huggingface/datasets/pull/2233
2,233
Fix `xnli` dataset tuple key
closed
0
2021-04-16T19:12:42
2021-04-19T08:56:42
2021-04-19T08:56:42
NikhilBartwal
[]
Closes #2229 The `xnli` dataset yields a tuple key in the case of `ar`, which is inconsistent with the acceptable key types (str/int). The key was thus ported to `str`, keeping the original information intact.
true
860,075,931
https://api.github.com/repos/huggingface/datasets/issues/2232
https://github.com/huggingface/datasets/pull/2232
2,232
Start filling GLUE dataset card
closed
2
2021-04-16T18:37:37
2021-04-21T09:33:09
2021-04-21T09:33:08
lhoestq
[]
The dataset card was pretty much empty. I added the descriptions (mainly from TFDS since the script is the same), and I also added the tasks tags as well as examples for a subset of the tasks. cc @sgugger
true
859,850,488
https://api.github.com/repos/huggingface/datasets/issues/2231
https://github.com/huggingface/datasets/pull/2231
2,231
Fix map when removing columns on a formatted dataset
closed
0
2021-04-16T14:08:55
2021-04-16T15:10:05
2021-04-16T15:10:04
lhoestq
[]
This should fix issue #2226 The `remove_columns` argument was ignored on formatted datasets
true
859,817,159
https://api.github.com/repos/huggingface/datasets/issues/2230
https://github.com/huggingface/datasets/issues/2230
2,230
Keys yielded while generating dataset are not being checked
closed
9
2021-04-16T13:29:47
2021-05-10T17:31:21
2021-05-10T17:31:21
NikhilBartwal
[ "enhancement" ]
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e. either `str` or `int`) as well as whether they are unique or not. Currently, the keys are not being checked for any of these, as evident from the `xnli` dataset generation: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Even after having a tuple as key, the dataset is generated without any warning. Also, as tested in the case of the `anli` dataset (I tweaked the dataset script to use `1` as a key for every example): ``` >>> import datasets >>> nik = datasets.load_dataset('anli') Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299... 0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''} 2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll 1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. 
Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''} 1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''} 1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''} 1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''} ``` Here also, the dataset was generated successfully even though it had the same keys, without any warning. 
The reason appears to stem from here: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988 Here, although it has access to every key, but it is not being checked and the example is written directly: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992 I would like to take this issue if you allow me. Thank You!
false
859,810,602
https://api.github.com/repos/huggingface/datasets/issues/2229
https://github.com/huggingface/datasets/issues/2229
2,229
`xnli` dataset creating a tuple key while yielding instead of `str` or `int`
closed
2
2021-04-16T13:21:53
2021-04-19T08:56:42
2021-04-19T08:56:42
NikhilBartwal
[]
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code at the beginning, which yields a tuple key instead of the specified `str` or `int` key: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Since community datasets in Tensorflow Datasets also use HF datasets, this causes a Tuple key error while loading HF's `xnli` dataset. I'm up for sending a fix for this; I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
false
859,795,563
https://api.github.com/repos/huggingface/datasets/issues/2228
https://github.com/huggingface/datasets/pull/2228
2,228
[WIP] Add ArrayXD support for fixed size list.
open
1
2021-04-16T13:04:08
2022-07-06T15:19:48
null
jblemoine
[]
Add support for fixed size list for ArrayXD when the shape is known. See https://github.com/huggingface/datasets/issues/2146 Since offsets are not stored anymore, the file size is now roughly equal to the actual data size.
true
859,771,526
https://api.github.com/repos/huggingface/datasets/issues/2227
https://github.com/huggingface/datasets/pull/2227
2,227
Use update_metadata_with_features decorator in class_encode_column method
closed
0
2021-04-16T12:31:41
2021-04-16T13:49:40
2021-04-16T13:49:39
SBrandeis
[]
Following @mariosasko's comment
true
859,720,302
https://api.github.com/repos/huggingface/datasets/issues/2226
https://github.com/huggingface/datasets/issues/2226
2,226
Batched map fails when removing all columns
closed
3
2021-04-16T11:17:01
2022-10-05T17:32:15
2022-10-05T17:32:15
villmow
[ "bug" ]
Hi @lhoestq, I'm hijacking this issue because I'm currently trying to do the approach you recommend: > Currently the optimal setup for single-column computations is probably to do something like > > ```python > result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names) > ``` Here is my code (see the edit, in which I added a simplified version). This is the error: ```bash pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000 ``` I wonder why this error occurs when I delete every column? Can you give me a hint? ### Edit: I preprocessed my dataset before (using map with the features argument) and saved it to disk. Could this be part of the error? I can iterate over the complete dataset and print every sample before calling map. There seems to be no other problem with the dataset. I tried to simplify the code that crashes: ```python # works log.debug(dataset.column_names) log.debug(dataset) for i, sample in enumerate(dataset): log.debug(i, sample) # crashes counted_dataset = dataset.map( lambda x: {"a": list(range(20))}, input_columns=column, remove_columns=dataset.column_names, load_from_cache_file=False, num_proc=num_workers, batched=True, ) ``` ``` pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000 ``` Edit2: Could this be a problem with a schema I set when preprocessing the dataset before? 
I tried to add the `features` argument to the function and then I get a new error: ```python # crashes counted_dataset = dataset.map( lambda x: {"a": list(range(20))}, input_columns=column, remove_columns=dataset.column_names, load_from_cache_file=False, num_proc=num_workers, batched=True, features=datasets.Features( { "a": datasets.Sequence(datasets.Value("int32")) } ) ) ``` ``` File "env/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1704, in _map_single writer.write_batch(batch) File "env/lib/python3.8/site-packages/datasets/arrow_writer.py", line 312, in write_batch col_type = schema.field(col).type if schema is not None else None File "pyarrow/types.pxi", line 1341, in pyarrow.lib.Schema.field KeyError: 'Column tokens does not exist in schema' ``` _Originally posted by @villmow in https://github.com/huggingface/datasets/issues/2193#issuecomment-820230874_
false
858,469,561
https://api.github.com/repos/huggingface/datasets/issues/2225
https://github.com/huggingface/datasets/pull/2225
2,225
fixed one instance of 'train' to 'test'
closed
2
2021-04-15T04:26:40
2021-04-15T22:09:50
2021-04-15T21:19:09
alexwdong
[]
I believe this should be 'test' instead of 'train'
true
857,983,361
https://api.github.com/repos/huggingface/datasets/issues/2224
https://github.com/huggingface/datasets/issues/2224
2,224
Raise error if Windows max path length is not disabled
open
0
2021-04-14T14:57:20
2021-04-14T14:59:13
null
albertvillanova
[]
On startup, raise an error if the Windows max path length limit is not disabled; ask the user to disable it. Linked to discussion in #2220.
false
857,870,800
https://api.github.com/repos/huggingface/datasets/issues/2223
https://github.com/huggingface/datasets/pull/2223
2,223
Set test cache config
closed
5
2021-04-14T12:55:24
2021-04-15T19:11:25
2021-04-15T19:11:25
albertvillanova
[]
Currently, running the tests populates the default cache directory `"~/.cache"`. This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
true
857,847,231
https://api.github.com/repos/huggingface/datasets/issues/2222
https://github.com/huggingface/datasets/pull/2222
2,222
Fix too long WindowsFileLock name
closed
3
2021-04-14T12:26:52
2021-04-14T15:00:25
2021-04-14T14:46:19
albertvillanova
[ "wontfix" ]
Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename.
true
857,833,770
https://api.github.com/repos/huggingface/datasets/issues/2221
https://github.com/huggingface/datasets/pull/2221
2,221
Add SLR70 - SLR80 and SLR86 to OpenSLR dataset
closed
0
2021-04-14T12:09:18
2021-04-14T13:50:19
2021-04-14T13:50:19
cahya-wirawan
[]
I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to the OpenSLR dataset. The languages are: Nigerian English, Chilean Spanish, Colombian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada.
true
857,774,626
https://api.github.com/repos/huggingface/datasets/issues/2220
https://github.com/huggingface/datasets/pull/2220
2,220
Fix infinite loop in WindowsFileLock
closed
4
2021-04-14T10:49:58
2021-04-14T14:59:50
2021-04-14T14:59:34
albertvillanova
[ "wontfix" ]
Raise exception to avoid infinite loop.
true
857,321,242
https://api.github.com/repos/huggingface/datasets/issues/2219
https://github.com/huggingface/datasets/pull/2219
2,219
Added CUAD dataset
closed
3
2021-04-13T21:05:03
2021-04-24T14:25:51
2021-04-16T08:50:44
bhavitvyamalik
[]
Dataset link : https://github.com/TheAtticusProject/cuad/ Working on README.md currently. Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1).
true
857,238,435
https://api.github.com/repos/huggingface/datasets/issues/2218
https://github.com/huggingface/datasets/issues/2218
2,218
Duplicates in the LAMA dataset
open
3
2021-04-13T18:59:49
2021-04-14T21:42:27
null
amarasovic
[]
I observed duplicates in the LAMA probing dataset, see a minimal code below. ``` >>> import datasets >>> dataset = datasets.load_dataset('lama') No config specified, defaulting to: lama/trex Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc) >>> train_dataset = dataset['train'] >>> train_dataset[0] {'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'} >>> train_dataset[1] {'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'} ``` I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. 
This particular duplicate comes from: ``` {"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]} ``` What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA?
false
857,011,314
https://api.github.com/repos/huggingface/datasets/issues/2217
https://github.com/huggingface/datasets/pull/2217
2,217
Revert breaking change in cache_files property
closed
0
2021-04-13T14:20:04
2021-04-14T14:24:24
2021-04-14T14:24:23
lhoestq
[]
#2025 changed the format of `Dataset.cache_files`. Before, it was formatted like ```python [{"filename": "path/to/file.arrow", "start": 0, "end": 1337}] ``` and it was changed to ```python ["path/to/file.arrow"] ``` since there are no start/end offsets available anymore. To make this less breaking, I'm setting the format back to a list of dicts: ```python [{"filename": "path/to/file.arrow"}] ```
true
856,955,534
https://api.github.com/repos/huggingface/datasets/issues/2216
https://github.com/huggingface/datasets/pull/2216
2,216
added real label for glue/mrpc to test set
closed
0
2021-04-13T13:20:20
2021-04-13T13:53:20
2021-04-13T13:53:19
philschmid
[]
Added real label to `glue.py` `mrpc` task for test split.
true
856,716,791
https://api.github.com/repos/huggingface/datasets/issues/2215
https://github.com/huggingface/datasets/pull/2215
2,215
Add datasets SLR35 and SLR36 to OpenSLR
closed
4
2021-04-13T08:24:07
2021-04-13T14:05:14
2021-04-13T14:05:14
cahya-wirawan
[]
I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB), which are large Javanese and Sundanese ASR training data sets collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia.
true
856,333,657
https://api.github.com/repos/huggingface/datasets/issues/2214
https://github.com/huggingface/datasets/issues/2214
2,214
load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
closed
4
2021-04-12T20:26:01
2021-04-23T15:20:02
2021-04-23T15:20:02
nsaphra
[ "bug" ]
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package. ```python >>> from datasets import load_metric >>> metric = load_metric("glue", "sst2") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module> @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' ```
false
856,025,320
https://api.github.com/repos/huggingface/datasets/issues/2213
https://github.com/huggingface/datasets/pull/2213
2,213
Fix lc_quad download checksum
closed
0
2021-04-12T14:16:59
2021-04-14T22:04:54
2021-04-14T13:42:25
mariosasko
[]
Fixes #2211
true
855,999,133
https://api.github.com/repos/huggingface/datasets/issues/2212
https://github.com/huggingface/datasets/issues/2212
2,212
Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
closed
5
2021-04-12T13:49:56
2023-10-03T16:09:19
2023-10-03T16:09:18
hanss0n
[]
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running: ```Python fquad = load_dataset("fquad") ``` which produces the following error: ``` Using custom data configuration default Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-48-a2721797e23b> in <module>() ----> 1 fquad = load_dataset("fquad") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") --> 616 raise ConnectionError("Couldn't reach {}".format(url)) 617 618 # Try a second time ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip ``` Does anyone know why that is and how to fix it?
false
855,988,410
https://api.github.com/repos/huggingface/datasets/issues/2211
https://github.com/huggingface/datasets/issues/2211
2,211
Getting checksum error when trying to load lc_quad dataset
closed
2
2021-04-12T13:38:58
2021-04-14T13:42:25
2021-04-14T13:42:25
hanss0n
[]
I'm having issues loading the [lc_quad](https://huggingface.co/datasets/lc_quad) dataset by running: ```Python lc_quad = load_dataset("lc_quad") ``` which is giving me the following error: ``` Using custom data configuration default Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7... --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-42-404ace83f73c> in <module>() ----> 1 lc_quad = load_dataset("lc_quad") 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip'] ``` Does anyone know why this could be and how I can fix it?
false
855,709,400
https://api.github.com/repos/huggingface/datasets/issues/2210
https://github.com/huggingface/datasets/issues/2210
2,210
dataloading slow when using HUGE dataset
closed
2
2021-04-12T08:33:02
2021-04-13T02:03:05
2021-04-13T02:03:05
hwijeen
[]
Hi, When I use datasets with 600GB data, the dataloading speed decreases significantly. I am experimenting with two datasets, and one is about 60GB and the other 600GB. Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training. When looking at the pytorch-lightning supported profile of two different runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when data is large. What could be the cause? * 60GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 200.33 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 71.994 |1 | 71.994 | 35.937 | run_training_batch | 0.64373 |100 | 64.373 | 32.133 | optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 | training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 | model_backward | 0.37552 |100 | 37.552 | 18.745 | model_forward | 0.22813 |100 | 22.813 | 11.387 | training_step | 0.22759 |100 | 22.759 | 11.361 | get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 | ``` * 600GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 3285.6 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 | run_training_batch | 7.2596 |100 | 725.96 | 22.095 | optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 | training_step_and_backward | 7.223 |100 | 722.3 | 21.984 | model_backward | 6.9662 |100 | 696.62 | 21.202 | get_train_batch | 
6.322 |100 | 632.2 | 19.241 | model_forward | 0.24902 |100 | 24.902 | 0.75789 | training_step | 0.2485 |100 | 24.85 | 0.75633 | ```
false
855,638,232
https://api.github.com/repos/huggingface/datasets/issues/2209
https://github.com/huggingface/datasets/pull/2209
2,209
Add code of conduct to the project
closed
0
2021-04-12T07:16:14
2021-04-12T17:55:52
2021-04-12T17:55:52
albertvillanova
[ "documentation" ]
Add code of conduct to the project and link it from README and CONTRIBUTING. This was already done in `transformers`.
true
855,343,835
https://api.github.com/repos/huggingface/datasets/issues/2208
https://github.com/huggingface/datasets/pull/2208
2,208
Remove Python2 leftovers
closed
1
2021-04-11T16:08:03
2021-04-14T22:05:36
2021-04-14T13:40:51
mariosasko
[]
This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported)
true
855,267,383
https://api.github.com/repos/huggingface/datasets/issues/2207
https://github.com/huggingface/datasets/issues/2207
2,207
making labels consistent across the datasets
closed
2
2021-04-11T10:03:56
2022-06-01T16:23:08
2022-06-01T16:21:10
dorost1234
[]
Hi, for accessing the labels one can type ``` >>> a.features['label'] ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None) ``` However, the label names are sometimes not consistent with the actual labels: for instance in the case of XNLI, the actual labels are 0, 1, 2, but if one tries to access them as above they are entailment, neutral, contradiction. It would be great to have the labels consistent. Thanks
false
855,252,415
https://api.github.com/repos/huggingface/datasets/issues/2206
https://github.com/huggingface/datasets/issues/2206
2,206
Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
closed
7
2021-04-11T08:40:09
2021-11-10T12:18:30
2021-11-10T12:04:28
yana-xuyan
[ "bug" ]
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below: Traceback (most recent call last): File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single writer.write(example) File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write self.write_on_file() File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__ out = out.cast(pa.list_(self.optimized_int_type)) File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127 Do you have any idea about it?
false
855,207,605
https://api.github.com/repos/huggingface/datasets/issues/2205
https://github.com/huggingface/datasets/pull/2205
2,205
Updating citation information on LinCE readme
closed
0
2021-04-11T03:18:05
2021-04-12T17:53:34
2021-04-12T17:53:34
gaguilar
[]
Hi! I just updated the citation information in this PR. It had an additional bibtex from one of the datasets used in LinCE and then the LinCE bibtex. I removed the former and added a link that shows the full list of citations for each dataset. Thanks!
true
855,144,431
https://api.github.com/repos/huggingface/datasets/issues/2204
https://github.com/huggingface/datasets/pull/2204
2,204
Add configurable options to `seqeval` metric
closed
0
2021-04-10T19:58:19
2021-04-15T13:49:46
2021-04-15T13:49:46
marrodion
[]
Fixes #2148 Adds options to use strict mode, different schemes of evaluation, sample weight and adjust zero_division behavior, if encountered. `seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea).
true
855,053,595
https://api.github.com/repos/huggingface/datasets/issues/2203
https://github.com/huggingface/datasets/pull/2203
2,203
updated banking77 train and test data
closed
2
2021-04-10T12:10:10
2021-04-23T14:33:39
2021-04-23T14:33:39
hsali
[]
true
854,501,109
https://api.github.com/repos/huggingface/datasets/issues/2202
https://github.com/huggingface/datasets/pull/2202
2,202
Add classes GenerateMode, DownloadConfig and Version to the documentation
closed
0
2021-04-09T12:58:19
2021-04-12T17:58:00
2021-04-12T17:57:59
albertvillanova
[]
Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`. Update the docstring of `load_dataset` to create cross-reference links to the classes. Related to #2187.
true
854,499,563
https://api.github.com/repos/huggingface/datasets/issues/2201
https://github.com/huggingface/datasets/pull/2201
2,201
Fix ArrowWriter overwriting features in ArrowBasedBuilder
closed
0
2021-04-09T12:56:19
2021-04-12T13:32:17
2021-04-12T13:32:16
lhoestq
[]
This should fix the issues with CSV loading experienced in #2153 and #2200. The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data. The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the features from the arrow data, discarding the features passed by the user. I fixed that and I updated the tests
true
854,449,656
https://api.github.com/repos/huggingface/datasets/issues/2200
https://github.com/huggingface/datasets/issues/2200
2,200
_prepare_split will overwrite DatasetBuilder.info.features
closed
2
2021-04-09T11:47:13
2021-06-04T10:37:35
2021-06-04T10:37:35
Gforky
[]
Hi, here is my issue: I initialized a Csv dataset builder with specific features: ``` def get_dataset_features(data_args): features = {} if data_args.text_features: features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")}) if data_args.num_features: features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")}) if data_args.label_classes: features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(",")) else: features["label"] = hf_features.Value("float32") return hf_features.Features(features) datasets = load_dataset(extension, data_files=data_files, sep=data_args.delimiter, header=data_args.header, column_names=data_args.column_names.split(",") if data_args.column_names else None, features=get_dataset_features(data_args=data_args)) ``` The `features` are printed out as below before `builder_instance.as_dataset` is called: ``` {'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` But after `builder_instance.as_dataset` is called for the Csv dataset builder, the `features` are changed to: ``` {'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info's features will be overwritten by `ArrowWriter`'s `_features`. But `ArrowWriter` is initialized without passing `features`. So my concern is: must this overwrite be done, or should it be an option to pass features to the `_prepare_split` function?
false
854,417,318
https://api.github.com/repos/huggingface/datasets/issues/2199
https://github.com/huggingface/datasets/pull/2199
2,199
Fix backward compatibility in Dataset.load_from_disk
closed
3
2021-04-09T11:01:10
2021-04-09T15:57:05
2021-04-09T15:57:05
albertvillanova
[]
Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files". Related to #2195.
true
854,357,481
https://api.github.com/repos/huggingface/datasets/issues/2198
https://github.com/huggingface/datasets/pull/2198
2,198
added file_permission in load_dataset
closed
1
2021-04-09T09:39:06
2021-04-16T14:11:46
2021-04-16T14:11:46
bhavitvyamalik
[]
As discussed in #2065, I've added a `file_permission` argument in `load_dataset`. Added mainly 2 things here: 1) Permission of downloaded datasets when converted to .arrow files can be changed with the `file_permission` argument in `load_dataset` (default is 0o644 only) 2) In case the user uses `map` later on to generate another cache file of the dataset, it ensures the permissions of the newly generated file are similar to that of the `*-train.arrow` file inside cache_dir for that dataset.
true
854,356,559
https://api.github.com/repos/huggingface/datasets/issues/2197
https://github.com/huggingface/datasets/pull/2197
2,197
fix missing indices_files in load_from_disk
closed
0
2021-04-09T09:37:57
2021-04-09T09:54:40
2021-04-09T09:54:39
lhoestq
[]
This should fix #2195. `load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping.
true
854,126,114
https://api.github.com/repos/huggingface/datasets/issues/2196
https://github.com/huggingface/datasets/issues/2196
2,196
`load_dataset` caches two arrow files?
closed
3
2021-04-09T03:49:19
2021-04-12T05:25:29
2021-04-12T05:25:29
hwijeen
[ "question" ]
Hi, I am using datasets to load large json file of 587G. I checked the cached folder and found that there are two arrow files created: * `cache-ed205e500a7dc44c.arrow` - 355G * `json-train.arrow` - 582G Why is the first file created? If I delete it, would I still be able to `load_from_disk`?
false
854,070,194
https://api.github.com/repos/huggingface/datasets/issues/2195
https://github.com/huggingface/datasets/issues/2195
2,195
KeyError: '_indices_files' in `arrow_dataset.py`
closed
2
2021-04-09T01:37:12
2021-04-09T09:55:09
2021-04-09T09:54:39
samsontmr
[ "bug" ]
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset. Trace: ``` Traceback (most recent call last): File "load_data.py", line 11, in <module> dataset = load_from_disk(SRC) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk if state["_indices_files"]: KeyError: '_indices_files' ``` I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions: https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634 May I suggest using `state.get()` instead of directly indexing the dictionary? @lhoestq
false
853,909,452
https://api.github.com/repos/huggingface/datasets/issues/2194
https://github.com/huggingface/datasets/issues/2194
2,194
py3.7: TypeError: can't pickle _LazyModule objects
closed
1
2021-04-08T21:02:48
2021-04-09T16:56:50
2021-04-09T01:52:57
stas00
[]
While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install: ``` git clone https://github.com/huggingface/transformers cd transformers pip install -e .[testing] export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \ examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \ --per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \ --fp16 ``` ``` Traceback (most recent call last): File "examples/language-modeling/run_clm.py", line 453, in <module> main() File "examples/language-modeling/run_clm.py", line 336, in main load_from_cache_file=not data_args.overwrite_cache, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map update_data=update_data, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key]) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash return cls.hash_default(value) File 
"/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps dump(obj, file) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump Pickler(file, recurse=True).dump(obj) File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function obj=obj, File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save rv = reduce(self.proto) TypeError: can't pickle _LazyModule objects ``` ``` $ python --version Python 3.7.4 $ python -m torch.utils.collect_env Collecting environment information... 
PyTorch version: 1.8.0.dev20210110+cu110 Is debug build: False CUDA used to build PyTorch: 11.0 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.2 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 ``` Thanks.
false
853,725,707
https://api.github.com/repos/huggingface/datasets/issues/2193
https://github.com/huggingface/datasets/issues/2193
2,193
Filtering/mapping on one column is very slow
closed
12
2021-04-08T18:16:14
2021-04-26T16:13:59
2021-04-26T16:13:59
norabelrose
[ "question" ]
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation. I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API. I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset. PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
false
853,547,910
https://api.github.com/repos/huggingface/datasets/issues/2192
https://github.com/huggingface/datasets/pull/2192
2,192
Fix typo in huggingface hub
closed
0
2021-04-08T14:42:24
2021-04-08T15:47:41
2021-04-08T15:47:40
LysandreJik
[]
pip knows how to resolve to `huggingface_hub`, but conda doesn't! The `packaging` dependency is also required for the build to complete.
true
853,364,204
https://api.github.com/repos/huggingface/datasets/issues/2191
https://github.com/huggingface/datasets/pull/2191
2,191
Refactorize tests to use Dataset as context manager
closed
4
2021-04-08T11:21:04
2021-04-19T07:53:11
2021-04-19T07:53:10
albertvillanova
[ "refactoring" ]
Refactorize Dataset tests to use Dataset as context manager.
true
853,181,564
https://api.github.com/repos/huggingface/datasets/issues/2190
https://github.com/huggingface/datasets/issues/2190
2,190
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
closed
2
2021-04-08T07:53:43
2021-05-24T10:03:55
2021-05-24T10:03:55
anassalamah
[]
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi. ``` train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that are not ar-en translations but ar-hi val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True) ``` * I'm fairly new to using datasets so I might be doing something wrong
false
853,052,891
https://api.github.com/repos/huggingface/datasets/issues/2189
https://github.com/huggingface/datasets/issues/2189
2,189
save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.
closed
1
2021-04-08T04:42:53
2022-06-01T16:32:15
2022-06-01T16:32:15
shamanez
[]
As you can see, it saves the entire dataset. @lhoestq You can check by going through the following example, ``` from datasets import load_from_disk,concatenate_datasets loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset') n=20 kb_list=[loaded_data.shard(n, i, contiguous=True) for i in range(n)] final_dataset=concatenate_datasets([kb_list[1],kb_list[2]]) final_dataset.save_to_disk('/home/gsir059/haha/k.arrow') ```
false
853,044,166
https://api.github.com/repos/huggingface/datasets/issues/2188
https://github.com/huggingface/datasets/issues/2188
2,188
Duplicate data in Timit dataset
closed
2
2021-04-08T04:21:54
2021-04-08T12:13:19
2021-04-08T12:13:19
thanh-p
[]
I ran a simple code to list all texts in Timit dataset and the texts were all the same. Is this dataset corrupted? **Code:** timit = load_dataset("timit_asr") print(*timit['train']['text'], sep='\n') **Result:** Would such an act of refusal be useful? Would such an act of refusal be useful? Would such an act of refusal be useful? Would such an act of refusal be useful? ... ... Would such an act of refusal be useful?
false
852,939,736
https://api.github.com/repos/huggingface/datasets/issues/2187
https://github.com/huggingface/datasets/issues/2187
2,187
Question (potential issue?) related to datasets caching
open
15
2021-04-08T00:16:28
2023-01-03T18:30:38
null
ioana-blue
[ "question" ]
I thought I had disabled datasets caching in my code, as follows: ``` from datasets import set_caching_enabled ... def main(): # disable caching in datasets set_caching_enabled(False) ``` However, in my log files I see messages like the following: ``` 04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877 04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93 ``` Can you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. Thank you!
false
852,840,819
https://api.github.com/repos/huggingface/datasets/issues/2186
https://github.com/huggingface/datasets/pull/2186
2,186
GEM: new challenge sets
closed
1
2021-04-07T21:39:07
2021-04-07T21:56:35
2021-04-07T21:56:35
yjernite
[]
This PR updates the GEM dataset to: - remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source - add context and services to Schema Guided Dialog - Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD
true
852,684,395
https://api.github.com/repos/huggingface/datasets/issues/2185
https://github.com/huggingface/datasets/issues/2185
2,185
.map() and distributed training
closed
8
2021-04-07T18:22:14
2021-10-23T07:11:15
2021-04-09T15:38:31
VictorSanh
[]
Hi, I have a question regarding distributed training and the `.map` call on a dataset. I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`. `dataset` is then tokenized: ```python datasets = load_from_disk(dataset_path=my_path) [...] def tokenize_function(examples): return tokenizer(examples[text_column_name]) logger.info("Mapping dataset to tokenized dataset.") tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=True, ) ``` I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split). When I relaunch the script, the tokenization map is skipped in favor of loading the 31 previously cached files, and that's perfect. Everything so far was done by launching a **single process script**. I now launch the same training script in **distributed mode** (`python -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files. I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't give the 31 cached files, so it probably isn't the right way to do it. **My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training. - I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case) - I am using version 1.5.0 of datasets if that matters.
false
852,597,258
https://api.github.com/repos/huggingface/datasets/issues/2184
https://github.com/huggingface/datasets/pull/2184
2,184
Implementation of class_encode_column
closed
1
2021-04-07T16:47:43
2021-04-16T11:44:37
2021-04-16T11:26:59
SBrandeis
[]
Addresses #2176 I'm happy to discuss the API and internals!
true
852,518,411
https://api.github.com/repos/huggingface/datasets/issues/2183
https://github.com/huggingface/datasets/pull/2183
2,183
Fix s3fs tests for py36 and py37+
closed
0
2021-04-07T15:17:11
2021-04-08T08:54:45
2021-04-08T08:54:44
lhoestq
[]
Recently several changes happened: 1. latest versions of `fsspec` require python>3.7 for async features 2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`. cc @philschmid
true
852,384,872
https://api.github.com/repos/huggingface/datasets/issues/2182
https://github.com/huggingface/datasets/pull/2182
2,182
Set default in-memory value depending on the dataset size
closed
4
2021-04-07T13:00:18
2021-04-20T14:20:12
2021-04-20T10:04:04
albertvillanova
[ "enhancement" ]
Set a default value for `in_memory` depending on the size of the dataset to be loaded. Close #2179. TODO: - [x] Add a section in the docs about this. - ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~
true
852,261,607
https://api.github.com/repos/huggingface/datasets/issues/2181
https://github.com/huggingface/datasets/issues/2181
2,181
Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
closed
9
2021-04-07T10:26:46
2021-04-12T07:15:55
2021-04-12T07:15:55
hwijeen
[]
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project. When loading a huge json file of 500GB, pyarrow complains as follows: ``` Traceback (most recent call last): File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir yield tmp_dir File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose): File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` When using only a small portion of the sample file, say first 100 lines, it works perfectly well.. I see that it is the error from pyarrow, but could you give me a hint or possible solutions? #369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance!
false
852,258,635
https://api.github.com/repos/huggingface/datasets/issues/2180
https://github.com/huggingface/datasets/pull/2180
2,180
Add tel to xtreme tatoeba
closed
0
2021-04-07T10:23:15
2021-04-07T15:50:35
2021-04-07T15:50:34
lhoestq
[]
This should fix issue #2149
true
852,237,957
https://api.github.com/repos/huggingface/datasets/issues/2179
https://github.com/huggingface/datasets/issues/2179
2,179
Load small datasets in-memory instead of using memory map
closed
0
2021-04-07T09:58:16
2021-04-20T10:04:04
2021-04-20T10:04:03
lhoestq
[ "enhancement", "generic discussion" ]
Currently all datasets are loaded using memory mapping by default in `load_dataset`. However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and: - its memory footprint would be small so it's ok - in-memory computations/queries would be faster - the caching on-disk would be disabled, making computations even faster (no I/O bound because of the disk) - but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed. Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.
false
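The proposal above boils down to a size-based heuristic. A minimal sketch of that decision, assuming a plain on-disk file; the threshold value and the function name are made up for illustration and are not the library's actual API.

```python
import os
import tempfile

# Assumed cutoff for "small enough to fit in RAM"; the real value would be
# configurable, this number is purely illustrative.
IN_MEMORY_MAX_BYTES = 250 * 1024 * 1024

def should_load_in_memory(path: str) -> bool:
    """Read small dataset files eagerly into RAM, memory-map big ones."""
    return os.path.getsize(path) <= IN_MEMORY_MAX_BYTES

# A tiny temporary file stands in for an on-disk dataset here.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
    path = f.name

print(should_load_in_memory(path))  # True
```

The trade-off listed in the issue (no on-disk cache for in-memory datasets) would then only ever affect files under the cutoff, where recomputation is cheap.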
852,215,058
https://api.github.com/repos/huggingface/datasets/issues/2178
https://github.com/huggingface/datasets/pull/2178
2,178
Fix cast memory usage by using map on subtables
closed
3
2021-04-07T09:30:50
2021-04-20T14:20:44
2021-04-13T09:28:16
lhoestq
[ "enhancement" ]
The `cast` operation on a pyarrow Table may create new arrays in memory. This is an issue since users expect memory mapped datasets to not fill up the RAM. To fix that I used `map` to write a new arrow file on disk when cast is used. To make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`. edit: we'll use the same mechanism for `filter`
true
852,065,307
https://api.github.com/repos/huggingface/datasets/issues/2177
https://github.com/huggingface/datasets/pull/2177
2,177
add social thumbnail
closed
0
2021-04-07T06:40:06
2021-04-07T08:16:01
2021-04-07T08:16:01
philschmid
[]
# What does this PR do? I added OpenGraph/Twitter Card support to the docs to create nice social thumbnails. ![Bildschirmfoto 2021-04-07 um 08 36 50](https://user-images.githubusercontent.com/32632186/113821698-bac2ce80-977c-11eb-81aa-d8f16355857e.png) To be able to add these I needed to install `sphinxext-opengraph`. I came across this [issue](https://github.com/readthedocs/readthedocs.org/issues/1758) on the readthedocs repo saying that since someone has already built this plugin, they will not integrate one themselves or provide documentation for it. That's why I added it for building the documentation. The repository can be found [here](https://github.com/wpilibsuite/sphinxext-opengraph/tree/main). P.S. It seems that `make style` never ran for `docs/`; I hope the changes are okay, otherwise I'll revert them.
true
851,865,795
https://api.github.com/repos/huggingface/datasets/issues/2176
https://github.com/huggingface/datasets/issues/2176
2,176
Converting a Value to a ClassLabel
closed
2
2021-04-06T22:54:16
2022-06-01T16:31:49
2022-06-01T16:31:49
nelson-liu
[ "enhancement" ]
Hi! In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.` Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
false
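For the question above, the string-to-integer step that `map()` would perform can be sketched without the library: the dict below emulates a `ClassLabel`-style `str2int` lookup (in the real library the column's feature type would also be updated, e.g. via `cast`). The label names and rows are made up for the example.

```python
# Emulated ClassLabel: the order of names defines the integer encoding,
# just as ClassLabel(names=[...]) would.
names = ["negative", "positive"]
str2int = {name: idx for idx, name in enumerate(names)}

# Rows as map() would see them, one example at a time.
rows = [{"text": "great", "label": "positive"},
        {"text": "awful", "label": "negative"}]
encoded = [{**row, "label": str2int[row["label"]]} for row in rows]
print([row["label"] for row in encoded])  # [1, 0]
```

With `datasets`, the equivalent would be a `dataset.map(...)` applying the same lookup to the label column.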
851,836,096
https://api.github.com/repos/huggingface/datasets/issues/2175
https://github.com/huggingface/datasets/issues/2175
2,175
dataset.search_batch() function outputs all -1 indices sometimes.
closed
6
2021-04-06T21:50:49
2021-04-16T12:21:16
2021-04-16T12:21:15
shamanez
[]
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**. During the retrieval phase, exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker. ![image](https://user-images.githubusercontent.com/16892570/113782387-37a67600-9786-11eb-9c29-acad661a9648.png) Here, my retrieval batch size is 2 and n_docs is 5. I can work around this at the np.stack call, but I want to ask why we get an output index of -1. Do you have any idea :)? Is this a problem with the index, where faiss can't find any similar vector? Is there documentation on the output index being -1? @lhoestq
false
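A hedged sketch of a guard against the all -1 batch described above, before the results are stacked. Faiss returns -1 when it finds fewer than k neighbours in the probed lists; the fallback id used as padding here is an assumption, not something from the issue.

```python
import numpy as np

# Batch of 2 queries, n_docs = 5; the second query found no neighbours at all,
# mirroring the failure mode in the issue.
retrieved = np.array([[3, 7, 12, -1, -1],
                      [-1, -1, -1, -1, -1]])

fallback_id = 0  # assumed: any valid document id can serve as padding
safe = np.where(retrieved >= 0, retrieved, fallback_id)
print(safe.tolist())  # [[3, 7, 12, 0, 0], [0, 0, 0, 0, 0]]
```

Raising `nprobe` on the IVF index (so more lists are scanned per query) is the usual way to make the -1 entries disappear at the source.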
851,383,675
https://api.github.com/repos/huggingface/datasets/issues/2174
https://github.com/huggingface/datasets/pull/2174
2,174
Pin docutils for better doc
closed
0
2021-04-06T12:40:20
2021-04-06T12:55:53
2021-04-06T12:55:53
sgugger
[]
The latest release of docutils makes the navbar in the documentation look weird and causes the Markdown to be wrongly interpreted: ![image](https://user-images.githubusercontent.com/35901082/113711773-5be55280-96b3-11eb-9b3b-9794f17709aa.png) We had the same problem in Transformers and solved it by pinning docutils (a dependency of sphinx). You can see the version after the change [here](https://32769-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
true
851,359,284
https://api.github.com/repos/huggingface/datasets/issues/2173
https://github.com/huggingface/datasets/pull/2173
2,173
Add OpenSLR dataset
closed
0
2021-04-06T12:08:34
2021-04-12T16:54:46
2021-04-12T16:54:46
cahya-wirawan
[]
OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed on OpenSLR; currently this PR includes only 9 of them: SLR41, SLR42, SLR43, SLR44, SLR63, SLR64, SLR65, SLR66 and SLR69 (Javanese, Khmer, Nepali and Sundanese, Malayalam, Marathi, Tamil, Telugu and Catalan). I can add the other speech datasets gradually later.
true