| id (int64, 599M–3.29B) | url (string, 58–61 chars) | html_url (string, 46–51 chars) | number (int64, 1–7.72k) | title (string, 1–290 chars) | state (string, 2 values) | comments (int64, 0–70) | created_at (timestamp[s], 2020-04-14 10:18:02 – 2025-08-05 09:28:51) | updated_at (timestamp[s], 2020-04-27 16:04:17 – 2025-08-05 11:39:56) | closed_at (timestamp[s], 2020-04-14 12:01:40 – 2025-08-01 05:15:45, nullable) | user_login (string, 3–26 chars) | labels (list, 0–4 items) | body (string, 0–228k chars, nullable) | is_pull_request (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
919,099,218
|
https://api.github.com/repos/huggingface/datasets/issues/2485
|
https://github.com/huggingface/datasets/issues/2485
| 2,485
|
Implement layered building
|
open
| 0
| 2021-06-11T18:54:25
| 2021-06-11T18:54:25
| null |
albertvillanova
|
[
"enhancement"
] |
As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190):
> My suggestion for this would be to have this enabled by default.
>
> Plus I don't know if there should be a dedicated issue for this, since it is another functionality. But I propose layered building rather than building everything at once. That is:
>
> 1. uncompress a handful of files via a generator enough to generate one arrow file
> 2. process arrow file 1
> 3. delete all the files that went in and aren't needed anymore.
>
> rinse and repeat.
>
> 1. This way much less disk space will be required - e.g. on JZ we won't be running into the inode limitation, and it'd also help with the collaborative hub training project
> 2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing
> 3. It would already include deleting temp files this issue is talking about
>
> I wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders.
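A rough sketch of what this layered loop could look like (the helper names, arguments and shard size are only illustrative assumptions, not actual `datasets` internals):
```python
import os

def build_in_layers(compressed_files, uncompress, write_arrow_shard, files_per_shard=10):
    """Sketch of the proposed loop: extract a handful of files, turn them into
    one arrow shard, then delete the extracted inputs before moving on."""
    for start in range(0, len(compressed_files), files_per_shard):
        batch = compressed_files[start : start + files_per_shard]
        extracted = [uncompress(path) for path in batch]                  # 1. uncompress a handful of files
        write_arrow_shard(extracted, shard_id=start // files_per_shard)   # 2. process them into one arrow file
        for path in extracted:                                            # 3. delete inputs that aren't needed anymore
            os.remove(path)
```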
| false
|
919,092,635
|
https://api.github.com/repos/huggingface/datasets/issues/2484
|
https://github.com/huggingface/datasets/issues/2484
| 2,484
|
Implement loading a dataset builder
|
closed
| 1
| 2021-06-11T18:47:22
| 2021-07-05T10:45:57
| 2021-07-05T10:45:57
|
albertvillanova
|
[
"enhancement"
] |
As discussed with @stas00 and @lhoestq, this would allow things like:
```python
from datasets import load_dataset_builder
dataset_name = "openwebtext"
builder = load_dataset_builder(dataset_name)
print(builder.cache_dir)
```
| false
|
918,871,712
|
https://api.github.com/repos/huggingface/datasets/issues/2483
|
https://github.com/huggingface/datasets/pull/2483
| 2,483
|
Use gc.collect only when needed to avoid slow downs
|
closed
| 2
| 2021-06-11T15:09:30
| 2021-06-18T19:25:06
| 2021-06-11T15:31:36
|
lhoestq
|
[] |
In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to `gc.collect` to resolve some issues on Windows (see https://github.com/huggingface/datasets/pull/2482).
However, calling `gc.collect` too often causes significant slowdowns (the CI run time doubled).
So I just moved the `gc.collect` call to the exact place where it's actually needed: when post-processing a dataset.
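A schematic illustration of the change (simplified placeholders, not the actual `datasets` source):
```python
import gc

def write_arrow_file(batch, path):
    # Before this PR: gc.collect() ran here, after every single arrow file,
    # which roughly doubled the CI run time.
    ...

def post_process_dataset(dataset):
    # After this PR: collect only where the open arrow file handles actually
    # need to be released, i.e. when post-processing a dataset.
    gc.collect()
    ...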
| true
|
918,846,027
|
https://api.github.com/repos/huggingface/datasets/issues/2482
|
https://github.com/huggingface/datasets/pull/2482
| 2,482
|
Allow to use tqdm>=4.50.0
|
closed
| 0
| 2021-06-11T14:49:21
| 2021-06-11T15:11:51
| 2021-06-11T15:11:50
|
lhoestq
|
[] |
We used to have permission errors on Windows with the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232))
They were due to open arrow files not being properly closed by pyarrow.
Since https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 `gc.collect` is called each time we no longer need an arrow file, to make sure that the files are closed.
close https://github.com/huggingface/datasets/issues/2471
cc @lewtun
| true
|
918,680,168
|
https://api.github.com/repos/huggingface/datasets/issues/2481
|
https://github.com/huggingface/datasets/issues/2481
| 2,481
|
Delete extracted files to save disk space
|
closed
| 1
| 2021-06-11T12:21:52
| 2021-07-19T09:08:18
| 2021-07-19T09:08:18
|
albertvillanova
|
[
"enhancement"
] |
As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space for the typical user.
| false
|
918,678,578
|
https://api.github.com/repos/huggingface/datasets/issues/2480
|
https://github.com/huggingface/datasets/issues/2480
| 2,480
|
Set download/extracted paths configurable
|
open
| 1
| 2021-06-11T12:20:24
| 2021-06-15T14:23:49
| null |
albertvillanova
|
[
"enhancement"
] |
As discussed with @stas00 and @lhoestq, making these paths configurable may allow users to overcome disk space limitations by using different partitions/drives (a configuration sketch follows the TODO list below).
TODO:
- [x] Set configurable extracted datasets path: #2487
- [x] Set configurable downloaded datasets path: #2488
- [ ] Set configurable "incomplete" datasets path?
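A sketch of how such configuration could look from the user side. `HF_DATASETS_CACHE` already exists; the download/extract-specific variable names below are assumptions based on #2487/#2488, not a definitive reference:
```python
import os

# The general cache location (existing):
os.environ["HF_DATASETS_CACHE"] = "/mnt/big_disk/hf_cache"

# Assumed variable names introduced by #2487/#2488 for the extracted and
# downloaded files specifically:
os.environ["HF_DATASETS_EXTRACTED_DATASETS_PATH"] = "/mnt/big_disk/hf_cache/extracted"
os.environ["HF_DATASETS_DOWNLOADED_DATASETS_PATH"] = "/mnt/big_disk/hf_cache/downloads"

import datasets  # import after setting the environment so the paths are picked up
```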
| false
|
918,672,431
|
https://api.github.com/repos/huggingface/datasets/issues/2479
|
https://github.com/huggingface/datasets/pull/2479
| 2,479
|
❌ load_datasets ❌
|
closed
| 0
| 2021-06-11T12:14:36
| 2021-06-11T14:46:25
| 2021-06-11T14:46:25
|
julien-c
|
[] | true
|
|
918,507,510
|
https://api.github.com/repos/huggingface/datasets/issues/2478
|
https://github.com/huggingface/datasets/issues/2478
| 2,478
|
Create release script
|
open
| 1
| 2021-06-11T09:38:02
| 2023-07-20T13:22:23
| null |
albertvillanova
|
[
"enhancement"
] |
Create a script so that releases can be done automatically (as done in `transformers`).
| false
|
918,334,431
|
https://api.github.com/repos/huggingface/datasets/issues/2477
|
https://github.com/huggingface/datasets/pull/2477
| 2,477
|
Fix docs custom stable version
|
closed
| 4
| 2021-06-11T07:26:03
| 2021-06-14T09:14:20
| 2021-06-14T08:20:18
|
albertvillanova
|
[] |
Currently the docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
| true
|
917,686,662
|
https://api.github.com/repos/huggingface/datasets/issues/2476
|
https://github.com/huggingface/datasets/pull/2476
| 2,476
|
Add TimeDial
|
closed
| 1
| 2021-06-10T18:33:07
| 2021-07-30T12:57:54
| 2021-07-30T12:57:54
|
bhavitvyamalik
|
[] |
Dataset: https://github.com/google-research-datasets/TimeDial
To-Do: Update README.md and add YAML tags
| true
|
917,650,882
|
https://api.github.com/repos/huggingface/datasets/issues/2475
|
https://github.com/huggingface/datasets/issues/2475
| 2,475
|
Issue in timit_asr database
|
closed
| 2
| 2021-06-10T18:05:29
| 2021-06-13T08:13:50
| 2021-06-13T08:13:13
|
hrahamim
|
[
"bug"
] |
## Describe the bug
I am trying to load the timit_asr dataset, however only the first record is shown (duplicated over all the rows).
I am using the following line of code:
`dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))`
The above code results in the same sentence duplicated ten times.
It also happens when I use the dataset viewer on Streamlit.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
data = dataset.to_pandas()
# Sample code to reproduce the bug
```
## Expected results
table with different row information
## Actual results
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1 (also occur in the latest version)
- Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 1.15.3 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| false
|
917,622,055
|
https://api.github.com/repos/huggingface/datasets/issues/2474
|
https://github.com/huggingface/datasets/issues/2474
| 2,474
|
cache_dir parameter for load_from_disk ?
|
closed
| 4
| 2021-06-10T17:39:36
| 2022-02-16T14:55:01
| 2022-02-16T14:55:00
|
chbensch
|
[
"enhancement"
] |
**Is your feature request related to a problem? Please describe.**
When using Google Colab, big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting Google Drive could be a possible solution. Unfortunately, when loading my own dataset with the `load_from_disk` function, the data gets cached to the VM's disk:
```python
from datasets import load_from_disk
myPreprocessedData = load_from_disk("/content/gdrive/MyDrive/ASR_data/myPreprocessedData")
```
I know that caching on Google Drive could slow down learning. But at least it would run.
**Describe the solution you'd like**
Add a `cache_dir` parameter to the `load_from_disk` function.
**Describe alternatives you've considered**
It looks like you could write a custom loading script for the load_dataset function. But this seems to be much too complex for my use case. Is there perhaps a template here that uses the load_from_disk function?
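For reference, a minimal sketch of the `cache_dir` behaviour that already exists on `load_dataset` and that this request asks to mirror on `load_from_disk` (the dataset name and Drive path are only examples):
```python
from datasets import load_dataset

# load_dataset already accepts cache_dir, so its Arrow cache can live on a
# mounted Google Drive instead of the Colab VM's local disk:
dataset = load_dataset(
    "imdb",
    cache_dir="/content/gdrive/MyDrive/hf_cache",
)
```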
| false
|
917,538,629
|
https://api.github.com/repos/huggingface/datasets/issues/2473
|
https://github.com/huggingface/datasets/pull/2473
| 2,473
|
Add Disfl-QA
|
closed
| 2
| 2021-06-10T16:18:00
| 2021-07-29T11:56:19
| 2021-07-29T11:56:18
|
bhavitvyamalik
|
[] |
Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags
| true
|
917,463,821
|
https://api.github.com/repos/huggingface/datasets/issues/2472
|
https://github.com/huggingface/datasets/issues/2472
| 2,472
|
Fix automatic generation of Zenodo DOI
|
closed
| 4
| 2021-06-10T15:15:46
| 2021-06-14T16:49:42
| 2021-06-14T16:49:42
|
albertvillanova
|
[
"bug"
] |
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right
| false
|
917,067,165
|
https://api.github.com/repos/huggingface/datasets/issues/2471
|
https://github.com/huggingface/datasets/issues/2471
| 2,471
|
Fix PermissionError on Windows when using tqdm >=4.50.0
|
closed
| 0
| 2021-06-10T08:31:49
| 2021-06-11T15:11:50
| 2021-06-11T15:11:50
|
albertvillanova
|
[
"bug"
] |
See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111
```
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
```
| false
|
916,724,260
|
https://api.github.com/repos/huggingface/datasets/issues/2470
|
https://github.com/huggingface/datasets/issues/2470
| 2,470
|
Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
|
closed
| 6
| 2021-06-09T22:40:22
| 2021-07-01T09:34:54
| 2021-07-01T09:11:13
|
mbforbes
|
[
"bug"
] |
## Describe the bug
Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or dependent on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any tips greatly appreciated, I'm happy to provide more info if it would help us diagnose.
## Steps to reproduce the bug
```python
# this function will be applied with map()
def tokenize_function(examples):
return tokenizer(
examples["text"],
padding=PaddingStrategy.DO_NOT_PAD,
truncation=True,
)
# data_files is a Dict[str, str] mapping name -> path
datasets = load_dataset("text", data_files={...})
# this is where the error happens if num_proc = 16,
# but is fine if num_proc = 1
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=num_workers,
)
```
## Expected results
The `map()` function succeeds with `num_proc` > 1.
## Actual results


## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, but I think N/A for this issue
- Using distributed or parallel set-up in script?: Multi-GPU on one machine, but I think also N/A for this issue
| false
|
916,440,418
|
https://api.github.com/repos/huggingface/datasets/issues/2469
|
https://github.com/huggingface/datasets/pull/2469
| 2,469
|
Bump tqdm version
|
closed
| 2
| 2021-06-09T17:24:40
| 2021-06-11T15:03:42
| 2021-06-11T15:03:36
|
lewtun
|
[] | true
|
|
916,427,320
|
https://api.github.com/repos/huggingface/datasets/issues/2468
|
https://github.com/huggingface/datasets/pull/2468
| 2,468
|
Implement ClassLabel encoding in JSON loader
|
closed
| 1
| 2021-06-09T17:08:54
| 2021-06-28T15:39:54
| 2021-06-28T15:05:35
|
albertvillanova
|
[] |
Close #2365.
| true
|
915,914,098
|
https://api.github.com/repos/huggingface/datasets/issues/2466
|
https://github.com/huggingface/datasets/pull/2466
| 2,466
|
change udpos features structure
|
closed
| 2
| 2021-06-09T08:03:31
| 2021-06-18T11:55:09
| 2021-06-16T10:41:37
|
cosmeowpawlitan
|
[] |
The structure is changed such that each example is a sentence
The change is done for issues:
#2061
#2444
Close #2061 , close #2444.
| true
|
915,525,071
|
https://api.github.com/repos/huggingface/datasets/issues/2465
|
https://github.com/huggingface/datasets/pull/2465
| 2,465
|
adding masahaner dataset
|
closed
| 3
| 2021-06-08T21:20:25
| 2021-06-14T14:59:05
| 2021-06-14T14:59:05
|
dadelani
|
[] |
Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner
@lhoestq, can you please review?
| true
|
915,485,601
|
https://api.github.com/repos/huggingface/datasets/issues/2464
|
https://github.com/huggingface/datasets/pull/2464
| 2,464
|
fix: adjusting indexing for the labels.
|
closed
| 1
| 2021-06-08T20:47:25
| 2021-06-09T10:15:46
| 2021-06-09T09:10:28
|
drugilsberg
|
[] |
The label indices were mismatched with the actual ones used in the dataset: specifically, `0` is used for `SUPPORTS` and `1` is used for `REFUTES`.
After this change, the `README.md` now reflects the content of `dataset_infos.json`.
Signed-off-by: Matteo Manica <drugilsberg@gmail.com>
| true
|
915,454,788
|
https://api.github.com/repos/huggingface/datasets/issues/2463
|
https://github.com/huggingface/datasets/pull/2463
| 2,463
|
Fix proto_qa download link
|
closed
| 0
| 2021-06-08T20:23:16
| 2021-06-10T12:49:56
| 2021-06-10T08:31:10
|
mariosasko
|
[] |
Fixes #2459
Instead of updating the path, this PR fixes a commit hash as suggested by @lhoestq.
| true
|
915,384,613
|
https://api.github.com/repos/huggingface/datasets/issues/2462
|
https://github.com/huggingface/datasets/issues/2462
| 2,462
|
Merge DatasetDict and Dataset
|
open
| 2
| 2021-06-08T19:22:04
| 2023-08-16T09:34:34
| null |
albertvillanova
|
[
"enhancement",
"generic discussion"
] |
As discussed in #2424 and #2437 (please see there for detailed conversation):
- It would be desirable to improve UX with respect the confusion between DatasetDict and Dataset.
- The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users.
- A user expects a "Dataset" (whatever it contains multiple or a single split) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.
Here is a proposal for discussion and refined (and potential abandon if it's not good enough):
- let's consider that a DatasetDict is also a Dataset with the various split concatenated one after the other
- let's disallow the use of integers in split names (probably not a very big breaking change)
- when you index with integers you access the examples progressively, one split after the other (in a deterministic order)
- when you index with strings/split name you have the same behavior as now (full backward compat)
- let's then also have all the methods of a Dataset on the DatasetDict
The end goal would be to merge both Dataset and DatasetDict object in a single object that would be (pretty much totally) backward compatible with both.
There are a few things that we could discuss if we want to merge Dataset and DatasetDict:
1. what happens if you index by a string? Does it return the column or the split? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature
```
from datasets import load_dataset
dataset = load_dataset(...)
dataset["train"]
dataset["input_ids"]
```
2. what happens when you iterate over the object? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.
Moreover regarding your points:
- integers are not allowed as split names already
- it's definitely doable to have all the methods. Maybe some of them, like train_test_split, which is currently only available for Dataset, can be tweaked to work for a split dataset
cc: @thomwolf @lhoestq
| false
|
915,286,150
|
https://api.github.com/repos/huggingface/datasets/issues/2461
|
https://github.com/huggingface/datasets/pull/2461
| 2,461
|
Support sliced list arrays in cast
|
closed
| 0
| 2021-06-08T17:38:47
| 2021-06-08T17:56:24
| 2021-06-08T17:56:23
|
lhoestq
|
[] |
There is this issue in pyarrow:
```python
import pyarrow as pa
arr = pa.array([[i * 10] for i in range(4)])
arr.cast(pa.list_(pa.int32())) # works
arr = arr.slice(1)
arr.cast(pa.list_(pa.int32())) # fails
# ArrowNotImplementedError("Casting sliced lists (non-zero offset) not yet implemented")
```
However in `Dataset.cast` we slice tables before casting their types (because casting a whole table at once is memory intensive), so we run into the same issue.
Because of this it is currently not possible to cast a Dataset with a Sequence feature type (unless the table is small enough to not be sliced).
In this PR I fixed this by resetting the offset of `pyarrow.ListArray` arrays to zero in the table before casting.
I used `pyarrow.compute.subtract` function to update the offsets of the ListArray.
cc @abhi1thakur @SBrandeis
| true
|
915,268,536
|
https://api.github.com/repos/huggingface/datasets/issues/2460
|
https://github.com/huggingface/datasets/pull/2460
| 2,460
|
Revert default in-memory for small datasets
|
closed
| 1
| 2021-06-08T17:14:23
| 2021-06-08T18:04:14
| 2021-06-08T17:55:43
|
albertvillanova
|
[
"enhancement"
] |
Close #2458
| true
|
915,222,015
|
https://api.github.com/repos/huggingface/datasets/issues/2459
|
https://github.com/huggingface/datasets/issues/2459
| 2,459
|
`Proto_qa` hosting seems to be broken
|
closed
| 1
| 2021-06-08T16:16:32
| 2021-06-10T08:31:09
| 2021-06-10T08:31:09
|
VictorSanh
|
[
"bug"
] |
## Describe the bug
The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now.
@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("proto_qa")
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
use_auth_token=use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators
train_fpath = dl_manager.download(_URLs[self.config.name]["train"])
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download
num_proc=download_config.num_proc,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl
```
| false
|
915,199,693
|
https://api.github.com/repos/huggingface/datasets/issues/2458
|
https://github.com/huggingface/datasets/issues/2458
| 2,458
|
Revert default in-memory for small datasets
|
closed
| 1
| 2021-06-08T15:51:41
| 2021-06-08T18:57:11
| 2021-06-08T17:55:43
|
albertvillanova
|
[
"enhancement"
] |
Users are reporting issues and confusion about setting default in-memory to True for small datasets.
We see 2 clear use cases of Datasets:
- the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation)
- some edge cases (speed benchmarks, interactive/exploratory analysis,...), where default in-memory can explicitly be enabled, and no caching will be done
After discussing with @lhoestq we have agreed to:
- revert this feature (implemented in #2182)
- explain in the docs how to optimize speed/performance by setting default in-memory
cc: @stas00 https://github.com/huggingface/datasets/pull/2409#issuecomment-856210552
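A sketch of what the explicit opt-in looks like after the revert (the `keep_in_memory` argument of `load_dataset` is real; the dataset choice is just an example, and the in-memory size threshold itself is configured separately, see #2454):
```python
from datasets import load_dataset

# With the default reverted, small datasets are memory-mapped like any other;
# in-memory loading (and therefore no caching of transforms) is an explicit choice:
ds = load_dataset("glue", "mrpc", split="train", keep_in_memory=True)
```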
| false
|
915,079,441
|
https://api.github.com/repos/huggingface/datasets/issues/2457
|
https://github.com/huggingface/datasets/pull/2457
| 2,457
|
Add align_labels_with_mapping function
|
closed
| 5
| 2021-06-08T13:54:00
| 2022-01-12T08:57:41
| 2021-06-17T09:56:52
|
lewtun
|
[] |
This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`.
An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings:
```python
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
}
```
while the `mnli` dataset has the `contradiction` and `entailment` labels swapped:
```python
id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}
```
As a result, we get a much lower accuracy during evaluation:
```python
from datasets import load_dataset
from transformers.trainer_utils import EvalPrediction
from transformers import AutoModelForSequenceClassification, Trainer
# load dataset for evaluation
mnli = load_dataset("glue", "mnli", split="test")
# load model
model_ckpt = "microsoft/deberta-base-mnli"
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)
# preprocess, create trainer ...
mnli_enc = ...
trainer = Trainer(model, args=args, tokenizer=tokenizer)
# generate preds
preds = trainer.predict(mnli_enc)
# preds.label_ids misaligned with model.config => returns wrong accuracy (too low)!
compute_metrics(EvalPrediction(preds.predictions, preds.label_ids))
```
The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:
```python
mnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=config.label2id, label_column="label")
# preds now aligned and everyone is happy :)
preds = trainer.predict(mnli_enc_aligned)
```
cc @thomwolf @lhoestq
| true
|
914,709,293
|
https://api.github.com/repos/huggingface/datasets/issues/2456
|
https://github.com/huggingface/datasets/pull/2456
| 2,456
|
Fix cross-reference typos in documentation
|
closed
| 0
| 2021-06-08T09:45:14
| 2021-06-08T17:41:37
| 2021-06-08T17:41:36
|
albertvillanova
|
[] |
Fix some minor typos in docs that prevent the creation of cross-reference links.
| true
|
914,177,468
|
https://api.github.com/repos/huggingface/datasets/issues/2455
|
https://github.com/huggingface/datasets/pull/2455
| 2,455
|
Update version in xor_tydi_qa.py
|
closed
| 1
| 2021-06-08T02:23:45
| 2021-06-14T15:35:25
| 2021-06-14T15:35:25
|
changjonathanc
|
[] |
Fix #2449
@lhoestq Should I revert to the old `dummy/1.0.0` or delete it and keep only `dummy/1.1.0`?
| true
|
913,883,631
|
https://api.github.com/repos/huggingface/datasets/issues/2454
|
https://github.com/huggingface/datasets/pull/2454
| 2,454
|
Rename config and environment variable for in memory max size
|
closed
| 1
| 2021-06-07T19:21:08
| 2021-06-07T20:43:46
| 2021-06-07T20:43:46
|
albertvillanova
|
[] |
As discussed in #2409, both config and environment variable have been renamed.
cc: @stas00, huggingface/transformers#12056
| true
|
913,729,258
|
https://api.github.com/repos/huggingface/datasets/issues/2453
|
https://github.com/huggingface/datasets/pull/2453
| 2,453
|
Keep original features order
|
closed
| 5
| 2021-06-07T16:26:38
| 2021-06-15T18:05:36
| 2021-06-15T15:43:48
|
albertvillanova
|
[] |
When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
I found this issue while working on #2366.
| true
|
913,603,877
|
https://api.github.com/repos/huggingface/datasets/issues/2452
|
https://github.com/huggingface/datasets/issues/2452
| 2,452
|
MRPC test set differences between torch and tensorflow datasets
|
closed
| 1
| 2021-06-07T14:20:26
| 2021-06-07T14:34:32
| 2021-06-07T14:34:32
|
FredericOdermatt
|
[
"bug"
] |
## Describe the bug
When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of importing the GLUE datasets.
## Steps to reproduce the bug
Minimal working code
```python
from datasets import load_dataset
import tensorflow as tf
import tensorflow_datasets
# torch
dataset = load_dataset("glue", "mrpc")
# tf
data = tensorflow_datasets.load('glue/{}'.format('mrpc'))
data = list(data['test'].as_numpy_iterator())
for i in range(40,50):
tf_sentence1 = data[i]['sentence1'].decode("utf-8")
tf_sentence2 = data[i]['sentence2'].decode("utf-8")
tf_label = data[i]['label']
index = data[i]['idx']
print('Index {}'.format(index))
torch_sentence1 = dataset['test']['sentence1'][index]
torch_sentence2 = dataset['test']['sentence2'][index]
torch_label = dataset['test']['label'][index]
print('Tensorflow: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(tf_sentence1, tf_sentence2, tf_label))
print('Torch: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(torch_sentence1, torch_sentence2, torch_label))
```
Sample output
```
Index 954
Tensorflow:
Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .
Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .
Label -1
Torch:
Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .
Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .
Label 1
Index 711
Tensorflow:
Sentence1 Others keep records sealed for as little as five years or as much as 30 .
Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .
Label -1
Torch:
Sentence1 Others keep records sealed for as little as five years or as much as 30 .
Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .
Label 0
```
## Expected results
I would expect the datasets to be independent of whether I am working with torch or tensorflow.
## Actual results
Test set labels are provided by `datasets.load_dataset()` for MRPC. However, MRPC is the only task where the test set labels are not -1.
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| false
|
913,263,340
|
https://api.github.com/repos/huggingface/datasets/issues/2451
|
https://github.com/huggingface/datasets/pull/2451
| 2,451
|
Mention that there are no answers in adversarial_qa test set
|
closed
| 0
| 2021-06-07T08:13:57
| 2021-06-07T08:34:14
| 2021-06-07T08:34:13
|
lhoestq
|
[] |
As mentioned in issue https://github.com/huggingface/datasets/issues/2447, there are no answers in the test set
| true
|
912,890,291
|
https://api.github.com/repos/huggingface/datasets/issues/2450
|
https://github.com/huggingface/datasets/issues/2450
| 2,450
|
BLUE file not found
|
closed
| 2
| 2021-06-06T17:01:54
| 2021-06-07T10:46:15
| 2021-06-07T10:46:15
|
mirfan899
|
[] |
Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 332, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 605, in load_metric
dataset=False,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 343, in prepare_module
combined_path, github_file_path
FileNotFoundError: Couldn't find file locally at blue/blue.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py.
The file is also not present on the master branch on github.
```
Here is the installed `datasets` version info:
```shell
pip freeze | grep datasets
datasets==1.7.0
```
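For reference, the metric script in the repository is named `bleu` (there is also `sacrebleu`), not `blue`, so the working call is simply:
```python
import datasets

metric = datasets.load_metric("bleu")
```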
| false
|
912,751,752
|
https://api.github.com/repos/huggingface/datasets/issues/2449
|
https://github.com/huggingface/datasets/pull/2449
| 2,449
|
Update `xor_tydi_qa` url to v1.1
|
closed
| 6
| 2021-06-06T09:44:58
| 2021-06-07T15:16:21
| 2021-06-07T08:31:04
|
changjonathanc
|
[] |
The dataset has been updated and the old URL no longer works, so I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366
| true
|
912,360,109
|
https://api.github.com/repos/huggingface/datasets/issues/2448
|
https://github.com/huggingface/datasets/pull/2448
| 2,448
|
Fix flores download link
|
closed
| 0
| 2021-06-05T17:30:24
| 2021-06-08T20:02:58
| 2021-06-07T08:18:25
|
mariosasko
|
[] | true
|
|
912,299,527
|
https://api.github.com/repos/huggingface/datasets/issues/2447
|
https://github.com/huggingface/datasets/issues/2447
| 2,447
|
dataset adversarial_qa has no answers in the "test" set
|
closed
| 2
| 2021-06-05T14:57:38
| 2021-06-07T11:13:07
| 2021-06-07T11:13:07
|
bjascob
|
[
"bug"
] |
## Describe the bug
When loading the adversarial_qa dataset, the 'test' split has no answers; only the 'train' and 'validation' splits do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta').
## Steps to reproduce the bug
```
from datasets import load_dataset
examples = load_dataset('adversarial_qa', 'adversarialQA', script_version="master")['test']
print('Loaded {:,} examples'.format(len(examples)))
has_answers = 0
for e in examples:
if e['answers']['text']:
has_answers += 1
print('{:,} have answers'.format(has_answers))
>>> Loaded 3,000 examples
>>> 0 have answers
examples = load_dataset('adversarial_qa', 'adversarialQA', script_version="master")['validation']
<...code above...>
>>> Loaded 3,000 examples
>>> 3,000 have answers
```
## Expected results
If 'test' is a valid dataset, it should have answers. Also note that all of the 'train' and 'validation' sets have answers; there are no "no answer" questions in this set (not sure if this is correct or not).
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyArrow version: 1.0.0
| false
|
911,635,399
|
https://api.github.com/repos/huggingface/datasets/issues/2446
|
https://github.com/huggingface/datasets/issues/2446
| 2,446
|
`yelp_polarity` is broken
|
closed
| 2
| 2021-06-04T15:44:29
| 2021-06-04T18:56:47
| 2021-06-04T18:56:47
|
JetRunner
|
[] |

| false
|
911,577,578
|
https://api.github.com/repos/huggingface/datasets/issues/2445
|
https://github.com/huggingface/datasets/pull/2445
| 2,445
|
Fix broken URLs for bn_hate_speech and covid_tweets_japanese
|
closed
| 2
| 2021-06-04T14:53:35
| 2021-06-04T17:39:46
| 2021-06-04T17:39:45
|
lewtun
|
[] |
Closes #2388
| true
|
911,297,139
|
https://api.github.com/repos/huggingface/datasets/issues/2444
|
https://github.com/huggingface/datasets/issues/2444
| 2,444
|
Sentence Boundaries missing in Dataset: xtreme / udpos
|
closed
| 2
| 2021-06-04T09:10:26
| 2021-06-18T11:53:43
| 2021-06-18T11:53:43
|
cosmeowpawlitan
|
[
"bug"
] |
I was browsing through annotation guidelines, as suggested by the datasets introduction.
The guidelines say "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldependencies.org/format.html#sentence-boundaries-and-comments)
But the sentence boundaries seem not to be well represented by the huggingface datasets features. I found out that multiple sentences are concatenated together as a 1D array, without any delimiter.
PAN-X, which is another token classification subset from xtreme, does represent the sentence boundaries using a 2D array.
You may compare in PAN-x.en and udpos.English in the explorer:
https://huggingface.co/datasets/viewer/?dataset=xtreme
| false
|
909,983,574
|
https://api.github.com/repos/huggingface/datasets/issues/2443
|
https://github.com/huggingface/datasets/issues/2443
| 2,443
|
Some tests hang on Windows
|
closed
| 3
| 2021-06-03T00:27:30
| 2021-06-28T08:47:39
| 2021-06-28T08:47:39
|
mariosasko
|
[
"bug"
] |
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO throwing an error is too harsh, but maybe we can emit a warning in the top-level `__init__.py ` on startup if long paths are not enabled.
| false
|
909,677,029
|
https://api.github.com/repos/huggingface/datasets/issues/2442
|
https://github.com/huggingface/datasets/pull/2442
| 2,442
|
add english language tags for ~100 datasets
|
closed
| 1
| 2021-06-02T16:24:56
| 2021-06-04T09:51:40
| 2021-06-04T09:51:39
|
VictorSanh
|
[] |
As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. This information was missing, so I am adding it to the READMEs.
Note that I didn't check all the subsets so it's possible that some of the datasets have subsets in other languages than English...
| true
|
908,554,713
|
https://api.github.com/repos/huggingface/datasets/issues/2441
|
https://github.com/huggingface/datasets/issues/2441
| 2,441
|
DuplicatedKeysError on personal dataset
|
closed
| 2
| 2021-06-01T17:59:41
| 2021-06-04T23:50:03
| 2021-06-04T23:50:03
|
lucaguarro
|
[
"bug"
] |
## Describe the bug
Since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note that my script was working fine with earlier versions of the Datasets library. I cannot say with 100% certainty whether I have been doing something wrong with my dataset script this whole time, or if this is simply a bug with the new version of datasets.
## Steps to reproduce the bug
I cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested.
## Expected results
For my data to be loaded.
## Actual results
**DuplicatedKeysError** exception is raised
```
Downloading and preparing dataset good_reads_practice_dataset/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/good_reads_practice_dataset/main_domain/1.1.0/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655...
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-c342ea0dae9d> in <module>()
----> 1 dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
749 try_from_hf_gcs=try_from_hf_gcs,
750 base_path=base_path,
--> 751 use_auth_token=use_auth_token,
752 )
753
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
573 if not downloaded_from_gcs:
574 self._download_and_prepare(
--> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
576 )
577 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
650 try:
651 # Prepare split will record examples associated to the split
--> 652 self._prepare_split(split_generator, **prepare_split_kwargs)
653 except OSError as e:
654 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
990 writer.write(example, key)
991 finally:
--> 992 num_examples, num_bytes = writer.finalize()
993
994 split_generator.split_info.num_examples = num_examples
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in finalize(self, close_stream)
407 # In case current_examples < writer_batch_size, but user uses finalize()
408 if self._check_duplicates:
--> 409 self.check_duplicate_keys()
410 # Re-intializing to empty list for next batch
411 self.hkey_record = []
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
```
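For reference, a common cause of this error is a loading script whose `_generate_examples` yields the same key (here `0`) for more than one example. A minimal, hypothetical sketch of the usual remedy, namely deriving a unique key per example from a running counter (the builder name and record fields below are made up):
```python
import json
import datasets

class MyDatasetBuilder(datasets.GeneratorBasedBuilder):  # hypothetical builder, other methods omitted
    ...

    def _generate_examples(self, filepath):
        # Yield a unique, deterministic key for every example (here a running
        # index); duplicate keys are exactly what the writer rejects above.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                record = json.loads(line)
                yield idx, {"text": record["text"], "label": record["label"]}
```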
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 3.0.0
| false
|
908,521,954
|
https://api.github.com/repos/huggingface/datasets/issues/2440
|
https://github.com/huggingface/datasets/issues/2440
| 2,440
|
Remove `extended` field from dataset tagger
|
closed
| 4
| 2021-06-01T17:18:42
| 2021-06-09T09:06:31
| 2021-06-09T09:06:30
|
lewtun
|
[
"bug"
] |
## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.mark.parametrize("dataset_name", get_changed_datasets(repo_path))
def test_changed_dataset_card(dataset_name):
card_path = repo_path / "datasets" / dataset_name / "README.md"
assert card_path.exists()
error_messages = []
try:
ReadMe.from_readme(card_path)
except Exception as readme_error:
error_messages.append(f"The following issues have been found in the dataset cards:\nREADME:\n{readme_error}")
try:
DatasetMetadata.from_readme(card_path)
except Exception as metadata_error:
error_messages.append(
f"The following issues have been found in the dataset cards:\nYAML tags:\n{metadata_error}"
)
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E YAML tags:
E __init__() got an unexpected keyword argument 'extended'
tests/test_dataset_cards.py:70: ValueError
```
Consider either removing this tag from the tagger or including it as part of the validation step in the CI.
cc @yjernite
| false
|
908,511,983
|
https://api.github.com/repos/huggingface/datasets/issues/2439
|
https://github.com/huggingface/datasets/pull/2439
| 2,439
|
Better error message when trying to access elements of a DatasetDict without specifying the split
|
closed
| 0
| 2021-06-01T17:04:32
| 2021-06-15T16:03:23
| 2021-06-07T08:54:35
|
lhoestq
|
[] |
As mentioned in #2437 it'd be nice to have an indication to the users when they try to access an element of a DatasetDict without specifying the split name.
cc @thomwolf
| true
|
908,461,914
|
https://api.github.com/repos/huggingface/datasets/issues/2438
|
https://github.com/huggingface/datasets/pull/2438
| 2,438
|
Fix NQ features loading: reorder fields of features to match nested fields order in arrow data
|
closed
| 0
| 2021-06-01T16:09:30
| 2021-06-04T09:02:31
| 2021-06-04T09:02:31
|
lhoestq
|
[] |
As mentioned in #2401, there is an issue when loading the features of `natural_questions` since the order of the nested fields in the features doesn't match. The order is important since it matters for the underlying arrow schema.
To fix that I re-order the features based on the arrow schema:
```python
inferred_features = Features.from_arrow_schema(arrow_table.schema)
self.info.features = self.info.features.reorder_fields_as(inferred_features)
assert self.info.features.type == inferred_features.type
```
The re-ordering is a recursive function. It takes into account that the `Sequence` feature type is a struct of list and not a list of struct.
Now it's possible to load `natural_questions` again :)
| true
|
908,108,882
|
https://api.github.com/repos/huggingface/datasets/issues/2437
|
https://github.com/huggingface/datasets/pull/2437
| 2,437
|
Better error message when using the wrong load_from_disk
|
closed
| 9
| 2021-06-01T09:43:22
| 2021-06-08T18:03:50
| 2021-06-08T18:03:50
|
lhoestq
|
[] |
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
| true
|
908,100,211
|
https://api.github.com/repos/huggingface/datasets/issues/2436
|
https://github.com/huggingface/datasets/pull/2436
| 2,436
|
Update DatasetMetadata and ReadMe
|
closed
| 0
| 2021-06-01T09:32:37
| 2021-06-14T13:23:27
| 2021-06-14T13:23:26
|
gchhablani
|
[] |
This PR contains the changes discussed in #2395.
**Edit**:
In addition to those changes, I'll be updating the `ReadMe` as follows:
Currently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors.
One way to make `ReadMe` consistent with `DatasetMetadata` and add a separate `.validate()` method is to throw separate parsing and validation errors.
This way, we don't have to throw validation errors, but only parsing errors in `__init__()`. We can have an option in `__init__()` to suppress parsing errors so that an object is created for validation. Doing this will allow the user to get all the errors in one go.
In `test_dataset_cards`, we are already catching error messages and appending them to a list. This can be done for `ReadMe()` for parsing errors, and `ReadMe(..., suppress_errors=True); readme.validate()` for validation, separately.
**Edit 2**:
The only parsing issue we have as of now is multiple headings at the same level with the same name. I assume this will happen very rarely, but it is still better to throw an error than silently pick one of them. It should be okay to separate it this way.
Wdyt @lhoestq ?
| true
|
907,505,531
|
https://api.github.com/repos/huggingface/datasets/issues/2435
|
https://github.com/huggingface/datasets/pull/2435
| 2,435
|
Insert Extractive QA templates for SQuAD-like datasets
|
closed
| 3
| 2021-05-31T14:09:11
| 2021-06-03T14:34:30
| 2021-06-03T14:32:27
|
lewtun
|
[] |
This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur
| true
|
907,503,557
|
https://api.github.com/repos/huggingface/datasets/issues/2434
|
https://github.com/huggingface/datasets/issues/2434
| 2,434
|
Extend QuestionAnsweringExtractive template to handle nested columns
|
closed
| 2
| 2021-05-31T14:06:51
| 2022-10-05T17:06:28
| 2022-10-05T17:06:28
|
lewtun
|
[
"enhancement"
] |
Currently the `QuestionAnsweringExtractive` task template and `prepare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the nested features differ from those in `squad` and trigger an `ArrowNotImplementedError`:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
<ipython-input-12-50e5b8f69c20> in <module>
----> 1 ds.prepare_for_task("question-answering-extractive")[0]
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1436 # We found a template so now flush `DatasetInfo` to skip the template update in `DatasetInfo.__post_init__`
1437 dataset.info.task_templates = None
-> 1438 dataset = dataset.cast(features=template.features)
1439 return dataset
1440
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
977 format = self.format
978 dataset = self.with_format("arrow")
--> 979 dataset = dataset.map(
980 lambda t: t.cast(schema),
981 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1600
1601 if num_proc is None or num_proc == 1:
-> 1602 return self._map_single(
1603 function=function,
1604 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
176 }
177 # apply actual function
--> 178 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
179 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
180 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1940 ) # Something simpler?
1941 try:
-> 1942 batch = apply_function_on_filtered_inputs(
1943 batch,
1944 indices,
~/git/datasets/src/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1836 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1837 processed_inputs = (
-> 1838 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1839 )
1840 if update_data is None:
~/git/datasets/src/datasets/arrow_dataset.py in <lambda>(t)
978 dataset = self.with_format("arrow")
979 dataset = dataset.map(
--> 980 lambda t: t.cast(schema),
981 batched=True,
982 batch_size=batch_size,
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/compute.py in cast(arr, target_type, safe)
241 else:
242 options = CastOptions.unsafe(target_type)
--> 243 return call_function("cast", [arr], options)
244
245
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from struct<answer_end: list<item: int32>, answer_start: list<item: int32>, text: list<item: string>> to struct using function cast_struct
```
| false
|
907,488,711
|
https://api.github.com/repos/huggingface/datasets/issues/2433
|
https://github.com/huggingface/datasets/pull/2433
| 2,433
|
Fix DuplicatedKeysError in adversarial_qa
|
closed
| 0
| 2021-05-31T13:48:47
| 2021-06-01T08:52:11
| 2021-06-01T08:52:11
|
mariosasko
|
[] |
Fixes #2431
| true
|
907,462,881
|
https://api.github.com/repos/huggingface/datasets/issues/2432
|
https://github.com/huggingface/datasets/pull/2432
| 2,432
|
Fix CI six installation on linux
|
closed
| 0
| 2021-05-31T13:15:36
| 2021-05-31T13:17:07
| 2021-05-31T13:17:06
|
lhoestq
|
[] |
For some reason we end up with this error in the Linux CI when running `pip install .[tests]`:
```
pip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate AlreadyInstalledCandidate(six 1.16.0 (/usr/local/lib/python3.6/site-packages)) does not satisfy SpecifierRequirement('six>1.9'), SpecifierRequirement('six>1.9'), SpecifierRequirement('six>=1.11'), SpecifierRequirement('six~=1.15'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.5.2'), SpecifierRequirement('six>=1.9.0'), SpecifierRequirement('six>=1.11.0'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.6.1'), SpecifierRequirement('six>=1.9'), SpecifierRequirement('six>=1.5'), SpecifierRequirement('six<2.0'), SpecifierRequirement('six<2.0'), SpecifierRequirement('six'), SpecifierRequirement('six'), SpecifierRequirement('six~=1.15.0'), SpecifierRequirement('six'), SpecifierRequirement('six<2.0,>=1.6.1'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.5.2'), SpecifierRequirement('six>=1.9.0')
```
example CI failure here:
https://app.circleci.com/pipelines/github/huggingface/datasets/6200/workflows/b64fdec9-f9e6-431c-acd7-e9f2c440c568/jobs/38247
The main version requirement comes from tensorflow: `six~=1.15.0`
So I pinned the six version to this.
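A sketch of what the pin looks like in practice (the exact file and requirements list it landed in are assumptions, not a quote of the repository's setup.py):
```python
# setup.py (sketch)
TESTS_REQUIRE = [
    "pytest",
    "six~=1.15.0",  # match tensorflow's constraint so pip's resolver sees one consistent requirement
]
```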
| true
|
907,413,691
|
https://api.github.com/repos/huggingface/datasets/issues/2431
|
https://github.com/huggingface/datasets/issues/2431
| 2,431
|
DuplicatedKeysError when trying to load adversarial_qa
|
closed
| 1
| 2021-05-31T12:11:19
| 2021-06-01T08:54:03
| 2021-06-01T08:52:11
|
hanss0n
|
[
"bug"
] |
## Describe the bug
## Steps to reproduce the bug
```python
dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual results
>DuplicatedKeysError: FAILURE TO GENERATE DATASET !
>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4
>Keys should be unique and deterministic in nature
>
>
>During handling of the above exception, another exception occurred:
>
>DuplicatedKeysError Traceback (most recent call last)
>
>/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
> 347 for hash, key in self.hkey_record:
> 348 if hash in tmp_record:
>--> 349 raise DuplicatedKeysError(key)
> 350 else:
> 351 tmp_record.add(hash)
>
>DuplicatedKeysError: FAILURE TO GENERATE DATASET !
>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4
>Keys should be unique and deterministic in nature
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| false
|
907,322,595
|
https://api.github.com/repos/huggingface/datasets/issues/2430
|
https://github.com/huggingface/datasets/pull/2430
| 2,430
|
Add version-specific BibTeX
|
closed
| 4
| 2021-05-31T10:05:42
| 2021-06-08T07:53:22
| 2021-06-08T07:53:22
|
albertvillanova
|
[] |
As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY
| true
|
907,321,665
|
https://api.github.com/repos/huggingface/datasets/issues/2429
|
https://github.com/huggingface/datasets/pull/2429
| 2,429
|
Rename QuestionAnswering template to QuestionAnsweringExtractive
|
closed
| 1
| 2021-05-31T10:04:42
| 2021-05-31T15:57:26
| 2021-05-31T15:57:24
|
lewtun
|
[] |
Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR.
| true
|
907,169,746
|
https://api.github.com/repos/huggingface/datasets/issues/2428
|
https://github.com/huggingface/datasets/pull/2428
| 2,428
|
Add copyright info for wiki_lingua dataset
|
closed
| 3
| 2021-05-31T07:22:52
| 2021-06-04T10:22:33
| 2021-06-04T10:22:33
|
PhilipMay
|
[] | true
|
|
907,162,923
|
https://api.github.com/repos/huggingface/datasets/issues/2427
|
https://github.com/huggingface/datasets/pull/2427
| 2,427
|
Add copyright info to MLSUM dataset
|
closed
| 2
| 2021-05-31T07:15:57
| 2021-06-04T09:53:50
| 2021-06-04T09:53:50
|
PhilipMay
|
[] | true
|
|
906,473,546
|
https://api.github.com/repos/huggingface/datasets/issues/2426
|
https://github.com/huggingface/datasets/issues/2426
| 2,426
|
Saving Graph/Structured Data in Datasets
|
closed
| 6
| 2021-05-29T13:35:21
| 2021-06-02T01:21:03
| 2021-06-02T01:21:03
|
gsh199449
|
[
"enhancement"
] |
Thanks for this amazing library! My question is: I have structured data that is organized as a graph. For example, a dataset with users' friendship relations and users' articles. When I try to save a python dict in the dataset, an error occurs: "did not recognize Python value type when inferring an Arrow data type".
Although I know that storing a python dict in pyarrow datasets is not the best practice, I have no idea how to save structured data in Datasets.
Thank you very much for your help.
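A minimal sketch of how such graph-structured records could be encoded with an explicit schema (the column names here are assumptions, not part of the original question):
```python
from datasets import Dataset, Features, Sequence, Value

# hypothetical schema: one row per user node, with an adjacency list for the
# friendship graph and the user's articles as a list of strings
features = Features(
    {
        "user_id": Value("string"),
        "friends": Sequence(Value("string")),   # ids of connected users (graph edges)
        "articles": Sequence(Value("string")),  # texts written by this user
    }
)
data = {
    "user_id": ["u1", "u2"],
    "friends": [["u2"], ["u1"]],
    "articles": [["first article"], []],
}
ds = Dataset.from_dict(data, features=features)
print(ds.features)
```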
| false
|
906,385,457
|
https://api.github.com/repos/huggingface/datasets/issues/2425
|
https://github.com/huggingface/datasets/pull/2425
| 2,425
|
Fix Docstring Mistake: dataset vs. metric
|
closed
| 4
| 2021-05-29T06:09:53
| 2021-06-01T08:18:04
| 2021-06-01T08:18:04
|
PhilipMay
|
[] |
PR to fix #2412
| true
|
906,193,679
|
https://api.github.com/repos/huggingface/datasets/issues/2424
|
https://github.com/huggingface/datasets/issues/2424
| 2,424
|
load_from_disk and save_to_disk are not compatible with each other
|
closed
| 6
| 2021-05-28T23:07:10
| 2021-06-08T19:22:32
| 2021-06-08T19:22:32
|
roholazandie
|
[] |
## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly, but given the same directory, load_from_disk throws an error that it can't find state.json. It looks like load_from_disk only works on a single split.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("art")
dataset.save_to_disk("mydir")
d = Dataset.load_from_disk("mydir")
```
## Expected results
It is expected that these two functions be the reverse of each other without more manipulation
## Actual results
FileNotFoundError: [Errno 2] No such file or directory: 'mydir/art/state.json'
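A hedged workaround sketch, assuming the top-level `datasets.load_from_disk` helper (which dispatches between `Dataset` and `DatasetDict` directories) is available:
```python
from datasets import load_dataset, load_from_disk

dataset = load_dataset("art")
dataset.save_to_disk("mydir")

# the module-level helper detects whether "mydir" holds a Dataset or a DatasetDict
reloaded = load_from_disk("mydir")
# a single split can also be loaded from its own subdirectory:
# train = Dataset.load_from_disk("mydir/train")
```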
## Environment info
- `datasets` version: 1.6.2
- Platform: Linux-5.4.0-73-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| false
|
905,935,753
|
https://api.github.com/repos/huggingface/datasets/issues/2423
|
https://github.com/huggingface/datasets/pull/2423
| 2,423
|
add `desc` in `map` for `DatasetDict` object
|
closed
| 3
| 2021-05-28T19:28:44
| 2021-05-31T14:51:23
| 2021-05-31T13:08:04
|
bhavitvyamalik
|
[] |
`desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well
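A quick usage sketch, assuming a `datasets` version that includes this change:
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc")  # a DatasetDict with train/validation/test splits
# `desc` now labels the progress bar of each split's map call
ds = ds.map(lambda example: example, desc="Identity pass")
```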
| true
|
905,568,548
|
https://api.github.com/repos/huggingface/datasets/issues/2422
|
https://github.com/huggingface/datasets/pull/2422
| 2,422
|
Fix save_to_disk nested features order in dataset_info.json
|
closed
| 0
| 2021-05-28T15:03:28
| 2021-05-28T15:26:57
| 2021-05-28T15:26:56
|
lhoestq
|
[] |
Fix issue https://github.com/huggingface/datasets/issues/2267
The order of the nested features matters (pyarrow limitation), but the save_to_disk method was saving the features types as JSON with `sort_keys=True`, which was breaking the order of the nested features.
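A small illustration of why `sort_keys=True` is the culprit (plain `json`, not the actual `datasets` serialization code):
```python
import json

# nested feature definitions are order-sensitive for pyarrow structs
features = {"answers": {"text": "string", "answer_start": "int32"}}

print(json.dumps(features, sort_keys=True))
# {"answers": {"answer_start": "int32", "text": "string"}}  <- order changed

print(json.dumps(features))
# {"answers": {"text": "string", "answer_start": "int32"}}  <- order preserved
```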
| true
|
905,549,756
|
https://api.github.com/repos/huggingface/datasets/issues/2421
|
https://github.com/huggingface/datasets/pull/2421
| 2,421
|
doc: fix typo HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
|
closed
| 0
| 2021-05-28T14:52:10
| 2021-06-04T09:52:45
| 2021-06-04T09:52:45
|
borisdayma
|
[] |
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES should be HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
| true
|
904,821,772
|
https://api.github.com/repos/huggingface/datasets/issues/2420
|
https://github.com/huggingface/datasets/pull/2420
| 2,420
|
Updated Dataset Description
|
closed
| 0
| 2021-05-28T07:10:51
| 2021-06-10T12:11:35
| 2021-06-10T12:11:35
|
binny-mathew
|
[] |
Added Point of contact information and several other details about the dataset.
| true
|
904,347,339
|
https://api.github.com/repos/huggingface/datasets/issues/2419
|
https://github.com/huggingface/datasets/pull/2419
| 2,419
|
adds license information for DailyDialog.
|
closed
| 5
| 2021-05-27T23:03:42
| 2021-05-31T13:16:52
| 2021-05-31T13:16:52
|
aditya2211
|
[] | true
|
|
904,051,497
|
https://api.github.com/repos/huggingface/datasets/issues/2418
|
https://github.com/huggingface/datasets/pull/2418
| 2,418
|
add utf-8 while reading README
|
closed
| 2
| 2021-05-27T18:12:28
| 2021-06-04T09:55:01
| 2021-06-04T09:55:00
|
bhavitvyamalik
|
[] |
It was causing tests to fail on Windows (see #2416). On Windows, the default encoding is CP1252, which is unable to decode the byte 0x9d.
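A minimal sketch of the idea (the file path is illustrative):
```python
# always pass an explicit encoding when reading dataset cards;
# relying on the platform default (CP1252 on Windows) fails on bytes like 0x9d
with open("README.md", encoding="utf-8") as f:
    readme_content = f.read()
```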
| true
|
903,956,071
|
https://api.github.com/repos/huggingface/datasets/issues/2417
|
https://github.com/huggingface/datasets/pull/2417
| 2,417
|
Make datasets PEP-561 compliant
|
closed
| 1
| 2021-05-27T16:16:17
| 2021-05-28T13:10:10
| 2021-05-28T13:09:16
|
SBrandeis
|
[] |
Allows type-checking datasets with `mypy` when it is imported as a third-party library
PEP-561: https://www.python.org/dev/peps/pep-0561
MyPy doc on the subject: https://mypy.readthedocs.io/en/stable/installed_packages.html
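For context, a hypothetical sketch of what PEP-561 compliance typically involves on the packaging side (not the exact diff of this PR):
```python
# setup.py (sketch): ship an empty `py.typed` marker file inside the package
# so that mypy picks up the inline type annotations of installed `datasets`
from setuptools import setup, find_packages

setup(
    name="datasets",
    packages=find_packages("src"),
    package_dir={"": "src"},
    package_data={"datasets": ["py.typed"]},
    zip_safe=False,  # mypy cannot analyze types inside zipped installs
)
```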
| true
|
903,932,299
|
https://api.github.com/repos/huggingface/datasets/issues/2416
|
https://github.com/huggingface/datasets/pull/2416
| 2,416
|
Add KLUE dataset
|
closed
| 7
| 2021-05-27T15:49:51
| 2021-06-09T15:00:02
| 2021-06-04T17:45:15
|
jungwhank
|
[] |
Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| true
|
903,923,097
|
https://api.github.com/repos/huggingface/datasets/issues/2415
|
https://github.com/huggingface/datasets/issues/2415
| 2,415
|
Cached dataset not loaded
|
closed
| 5
| 2021-05-27T15:40:06
| 2021-06-02T13:15:47
| 2021-06-02T13:15:47
|
borisdayma
|
[
"bug"
] |
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
return (
batch["duration"] <= 10
and batch["duration"] >= 1
and len(batch["target_text"]) > 5
)
def prepare_dataset(batch):
batch["input_values"] = processor(
batch["speech"], sampling_rate=batch["sampling_rate"][0]
).input_values
with processor.as_target_processor():
batch["labels"] = processor(batch["target_text"]).input_ids
return batch
train_dataset = train_dataset.filter(
filter_by_duration,
remove_columns=["duration"],
num_proc=data_args.preprocessing_num_workers,
)
# PROBLEM HERE -> below function is reexecuted and cache is not loaded
train_dataset = train_dataset.map(
prepare_dataset,
remove_columns=train_dataset.column_names,
batch_size=training_args.per_device_train_batch_size,
batched=True,
num_proc=data_args.preprocessing_num_workers,
)
# Later in script
set_caching_enabled(False)
# apply map on trained model to eval/test sets
```
## Expected results
The cached dataset should always be reloaded.
## Actual results
The function is reexecuted.
I have access to cached files `cache-xxxxx.arrow`.
Is there a way I can somehow load manually 2 versions and see how the hash was created for debug purposes (to know if it's an issue with dataset or function)?
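A hedged debugging sketch, assuming the internal `Hasher` utility used for fingerprinting is importable (it is private API, so this may change between versions):
```python
from datasets.fingerprint import Hasher

# if the hash of the function changes between runs, the cache miss comes from
# the function's pickled state (e.g. a closure over `processor`), not the dataset
print(Hasher.hash(filter_by_duration))
print(Hasher.hash(prepare_dataset))
```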
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| false
|
903,877,096
|
https://api.github.com/repos/huggingface/datasets/issues/2414
|
https://github.com/huggingface/datasets/pull/2414
| 2,414
|
Update README.md
|
closed
| 2
| 2021-05-27T14:53:19
| 2021-06-28T13:46:14
| 2021-06-28T13:04:56
|
cryoff
|
[] |
Provides description of data instances and dataset features
| true
|
903,777,557
|
https://api.github.com/repos/huggingface/datasets/issues/2413
|
https://github.com/huggingface/datasets/issues/2413
| 2,413
|
AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
|
closed
| 1
| 2021-05-27T13:44:28
| 2021-06-01T01:05:47
| 2021-06-01T01:05:47
|
jungwhank
|
[
"bug"
] |
## Describe the bug
Hello,
I'm trying to add a dataset and contribute, but the test keeps failing with the CLI command below.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce the bug
It seems like a bug, since I see the error with an existing dataset, not with the dataset I'm trying to add.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<any_dataset>`
## Expected results
All test passed
## Actual results
```
# check that dataset is not empty
self.parent.assertListEqual(sorted(dataset_builder.info.splits.keys()), sorted(dataset))
for split in dataset_builder.info.splits.keys():
# check that loaded datset is not empty
self.parent.assertTrue(len(dataset[split]) > 0)
# check that we can cast features for each task template
> task_templates = dataset_builder.info.task_templates
E AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
tests/test_dataset_common.py:175: AttributeError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Darwin-20.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| false
|
903,769,151
|
https://api.github.com/repos/huggingface/datasets/issues/2412
|
https://github.com/huggingface/datasets/issues/2412
| 2,412
|
Docstring mistake: dataset vs. metric
|
closed
| 1
| 2021-05-27T13:39:11
| 2021-06-01T08:18:04
| 2021-06-01T08:18:04
|
PhilipMay
|
[] |
This:
https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582
Should better be something like:
`a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)`
I can provide a PR l8er...
| false
|
903,671,778
|
https://api.github.com/repos/huggingface/datasets/issues/2411
|
https://github.com/huggingface/datasets/pull/2411
| 2,411
|
Add DOI badge to README
|
closed
| 0
| 2021-05-27T12:36:47
| 2021-05-27T13:42:54
| 2021-05-27T13:42:54
|
albertvillanova
|
[] |
Once published the latest release, the DOI badge has been automatically generated by Zenodo.
| true
|
903,613,676
|
https://api.github.com/repos/huggingface/datasets/issues/2410
|
https://github.com/huggingface/datasets/pull/2410
| 2,410
|
fix #2391 add original answers in kilt-TriviaQA
|
closed
| 5
| 2021-05-27T11:54:29
| 2021-06-15T12:35:57
| 2021-06-14T17:29:10
|
PaulLerner
|
[] |
cc @yjernite is it ok like this?
| true
|
903,441,398
|
https://api.github.com/repos/huggingface/datasets/issues/2409
|
https://github.com/huggingface/datasets/pull/2409
| 2,409
|
Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
|
closed
| 14
| 2021-05-27T09:07:00
| 2021-06-08T16:00:55
| 2021-05-27T09:33:41
|
lhoestq
|
[] |
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
| true
|
903,422,648
|
https://api.github.com/repos/huggingface/datasets/issues/2408
|
https://github.com/huggingface/datasets/pull/2408
| 2,408
|
Fix head_qa keys
|
closed
| 0
| 2021-05-27T08:50:19
| 2021-05-27T09:05:37
| 2021-05-27T09:05:36
|
lhoestq
|
[] |
There were duplicate in the keys, as mentioned in #2382
| true
|
903,111,755
|
https://api.github.com/repos/huggingface/datasets/issues/2407
|
https://github.com/huggingface/datasets/issues/2407
| 2,407
|
.map() function got an unexpected keyword argument 'cache_file_name'
|
closed
| 3
| 2021-05-27T01:54:26
| 2021-05-27T13:46:40
| 2021-05-27T13:46:40
|
cindyxinyiwang
|
[
"bug"
] |
## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected keyword argument 'cache_file_name'".
I believe I'm using the latest datasets, 1.6.2. It also seems like the documentation and the actual code indicate there is a 'cache_file_name' argument for the .map() function.
Here is the code I use:
## Steps to reproduce the bug
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=True,
cache_file_name="my_tokenized_file"
)
```
## Actual results
tokenized_datasets = datasets.map(
TypeError: map() got an unexpected keyword argument 'cache_file_name'
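A hedged sketch of a likely fix, building on the snippet above: `load_from_disk` returns a `DatasetDict` here, and on a `DatasetDict` the per-split argument is the plural `cache_file_names` dict rather than `cache_file_name`:
```python
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
    # one cache file per split, keyed by split name
    cache_file_names={split: f"my_tokenized_file_{split}.arrow" for split in datasets},
)
```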
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.6.2
- Platform:Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10
- Python version:3.8.5
- PyArrow version:3.0.0
| false
|
902,643,844
|
https://api.github.com/repos/huggingface/datasets/issues/2406
|
https://github.com/huggingface/datasets/issues/2406
| 2,406
|
Add guide on using task templates to documentation
|
closed
| 0
| 2021-05-26T16:28:26
| 2022-10-05T17:07:00
| 2022-10-05T17:07:00
|
lewtun
|
[
"enhancement"
] |
Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
| false
|
901,227,658
|
https://api.github.com/repos/huggingface/datasets/issues/2405
|
https://github.com/huggingface/datasets/pull/2405
| 2,405
|
Add dataset tags
|
closed
| 1
| 2021-05-25T18:57:29
| 2021-05-26T16:54:16
| 2021-05-26T16:40:07
|
OyvindTafjord
|
[] |
The dataset tags were provided by Peter Clark following the guide.
| true
|
901,179,832
|
https://api.github.com/repos/huggingface/datasets/issues/2404
|
https://github.com/huggingface/datasets/pull/2404
| 2,404
|
Paperswithcode dataset mapping
|
closed
| 2
| 2021-05-25T18:14:26
| 2021-05-26T11:21:25
| 2021-05-26T11:17:18
|
julien-c
|
[] |
This is a continuation of https://github.com/huggingface/huggingface_hub/pull/43, encoded directly inside dataset cards.
As discussed:
- `paperswithcode_id: null` when the dataset doesn't exist on paperswithcode's side.
- I've added this new key at the end of the yaml instead of ordering all keys alphabetically as pyyaml's default. No strong opinion on that one though
| true
|
900,059,014
|
https://api.github.com/repos/huggingface/datasets/issues/2403
|
https://github.com/huggingface/datasets/pull/2403
| 2,403
|
Free datasets with cache file in temp dir on exit
|
closed
| 0
| 2021-05-24T22:15:11
| 2021-05-26T17:25:19
| 2021-05-26T16:39:29
|
mariosasko
|
[] |
This PR properly cleans up the memory-mapped tables that reference the cache files inside the temp dir.
Since the built-in `_finalizer` of `TemporaryDirectory` can't be modified, this PR defines its own `TemporaryDirectory` class that accepts a custom clean-up function.
Fixes #2402
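A rough sketch of the idea (using `atexit` for brevity rather than the weakref-based finalizer the PR actually uses):
```python
import atexit
import shutil
import tempfile


class TemporaryDirectoryWithCustomCleanup:
    """Temp dir that runs a user hook (e.g. releasing memory-mapped tables)
    before deleting its contents at interpreter exit."""

    def __init__(self, before_cleanup=None):
        self.name = tempfile.mkdtemp()
        self._before_cleanup = before_cleanup
        atexit.register(self.cleanup)

    def cleanup(self):
        if self._before_cleanup is not None:
            self._before_cleanup()  # drop references so files can be unlinked on Windows
        shutil.rmtree(self.name, ignore_errors=True)
```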
| true
|
900,025,329
|
https://api.github.com/repos/huggingface/datasets/issues/2402
|
https://github.com/huggingface/datasets/issues/2402
| 2,402
|
PermissionError on Windows when using temp dir for caching
|
closed
| 0
| 2021-05-24T21:22:59
| 2021-05-26T16:39:29
| 2021-05-26T16:39:29
|
mariosasko
|
[
"bug"
] |
Currently, the following code raises a PermissionError on master if working on Windows:
```python
# run as a script or call exit() in REPL to initiate the temp dir cleanup
from datasets import *
d = load_dataset("sst", split="train", keep_in_memory=False)
set_caching_enabled(False)
d.map(lambda ex: ex)
```
Error stack trace:
```
Traceback (most recent call last):
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 624, in _exitfunc
f()
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 548, in __call__
return info.func(*info.args, **(info.kwargs or {}))
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\tempfile.py", line 799, in _cleanup
_shutil.rmtree(name)
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 500, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 395, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 393, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\Mario\\AppData\\Local\\Temp\\tmp20epyhmq\\cache-87a87ffb5a956e68.arrow'
```
| false
|
899,910,521
|
https://api.github.com/repos/huggingface/datasets/issues/2401
|
https://github.com/huggingface/datasets/issues/2401
| 2,401
|
load_dataset('natural_questions') fails with "ValueError: External features info don't match the dataset"
|
closed
| 4
| 2021-05-24T18:38:53
| 2021-06-09T09:07:25
| 2021-06-09T09:07:25
|
jonrbates
|
[
"bug"
] |
## Describe the bug
load_dataset('natural_questions') throws ValueError
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset('natural_questions', split='validation[:10]')
```
## Expected results
Call to load_dataset returns data.
## Actual results
```
Using custom data configuration default
Reusing dataset natural_questions (/mnt/d/huggingface/datasets/natural_questions/default/0.0.2/19bc04755018a3ad02ee74f7045cde4ba9b4162cb64450a87030ab786b123b76)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-d55ab8a8cc1c> in <module>
----> 1 datasets = load_dataset('natural_questions', split='validation[:10]', cache_dir='/mnt/d/huggingface/datasets')
~/miniconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
757 )
--> 758 ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
759 if save_infos:
760 builder_instance._save_infos()
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory)
735
736 # Create a dataset for each of the given splits
--> 737 datasets = utils.map_nested(
738 partial(
739 self._build_single_dataset,
~/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
193 # Singleton
194 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 195 return function(data_struct)
196
197 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory)
762
763 # Build base dataset
--> 764 ds = self._as_dataset(
765 split=split,
766 in_memory=in_memory,
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _as_dataset(self, split, in_memory)
838 in_memory=in_memory,
839 )
--> 840 return Dataset(**dataset_kwargs)
841
842 def _post_process(self, dataset: Dataset, resources_paths: Dict[str, str]) -> Optional[Dataset]:
~/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
271 assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object"
272 if self.info.features.type != inferred_features.type:
--> 273 raise ValueError(
274 "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format(
275 self.info.features, self.info.features.type, inferred_features, inferred_features.type
ValueError: External features info don't match the dataset:
Got
{'id': Value(dtype='string', id=None), 'document': {'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'html': Value(dtype='string', id=None), 'tokens': Sequence(feature={'token': Value(dtype='string', id=None), 'is_html': Value(dtype='bool', id=None)}, length=-1, id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': Sequence(feature={'id': Value(dtype='string', id=None), 'long_answer': {'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None)}, 'short_answers': Sequence(feature={'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}, length=-1, id=None), 'yes_no_answer': ClassLabel(num_classes=2, names=['NO', 'YES'], names_file=None, id=None)}, length=-1, id=None)}
with type
struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<start_token: int64, end_token: int64, start_byte: int64, end_byte: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<title: string, url: string, html: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>>, id: string, question: struct<text: string, tokens: list<item: string>>>
but expected something like
{'id': Value(dtype='string', id=None), 'document': {'html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'tokens': {'is_html': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'token': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'url': Value(dtype='string', id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': {'id': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'long_answer': [{'end_byte': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'start_token': Value(dtype='int64', id=None)}], 'short_answers': [{'end_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'end_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}], 'yes_no_answer': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}}
with type
struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<end_byte: int64, end_token: int64, start_byte: int64, start_token: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<html: string, title: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>, url: string>, id: string, question: struct<text: string, tokens: list<item: string>>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| false
|
899,867,212
|
https://api.github.com/repos/huggingface/datasets/issues/2400
|
https://github.com/huggingface/datasets/issues/2400
| 2,400
|
Concatenate several datasets with removed columns is not working.
|
closed
| 2
| 2021-05-24T17:40:15
| 2021-05-25T05:52:01
| 2021-05-25T05:51:59
|
philschmid
|
[
"bug"
] |
## Describe the bug
You can't concatenate datasets after removing columns from them.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann= load_dataset("wikiann","en")
wikiann["train"] = wikiann["train"].remove_columns(["langs","spans"])
wikiann["test"] = wikiann["test"].remove_columns(["langs","spans"])
assert wikiann["train"].features.type == wikiann["test"].features.type
concate = concatenate_datasets([wikiann["train"],wikiann["test"]])
```
## Expected results
Merged dataset
## Actual results
```python
ValueError: External features info don't match the dataset:
Got
{'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ner_tags': Sequence(feature=ClassLabel(num_classes=7, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC'], names_file=None, id=None), length=-1, id=None), 'langs': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'spans': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
with type
struct<langs: list<item: string>, ner_tags: list<item: int64>, spans: list<item: string>, tokens: list<item: string>>
but expected something like
{'ner_tags': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
with type
struct<ner_tags: list<item: int64>, tokens: list<item: string>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: ~1.6.2~ 1.5.0
- Platform: macos
- Python version: 3.8.5
- PyArrow version: 3.0.0
| false
|
899,853,610
|
https://api.github.com/repos/huggingface/datasets/issues/2399
|
https://github.com/huggingface/datasets/pull/2399
| 2,399
|
Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
|
closed
| 5
| 2021-05-24T17:19:15
| 2021-05-27T09:07:15
| 2021-05-26T16:07:54
|
albertvillanova
|
[] |
Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow turning off the default behavior of loading small datasets in memory (and not caching them).
Fix #2387.
| true
|
899,511,837
|
https://api.github.com/repos/huggingface/datasets/issues/2398
|
https://github.com/huggingface/datasets/issues/2398
| 2,398
|
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
|
closed
| 1
| 2021-05-24T10:03:34
| 2022-10-05T17:13:49
| 2022-10-05T17:13:49
|
anassalamah
|
[
"bug"
] |
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True)
```
* I'm fairly new to using datasets so I might be doing something wrong
| false
|
899,427,378
|
https://api.github.com/repos/huggingface/datasets/issues/2397
|
https://github.com/huggingface/datasets/pull/2397
| 2,397
|
Fix number of classes in indic_glue sna.bn dataset
|
closed
| 2
| 2021-05-24T08:18:55
| 2021-05-25T16:32:16
| 2021-05-25T16:32:16
|
albertvillanova
|
[] |
As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11.
| true
|
899,016,308
|
https://api.github.com/repos/huggingface/datasets/issues/2396
|
https://github.com/huggingface/datasets/issues/2396
| 2,396
|
strange datasets from OSCAR corpus
|
open
| 2
| 2021-05-23T13:06:02
| 2021-06-17T13:54:37
| null |
cosmeowpawlitan
|
[
"bug"
] |


According to the [official site](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2KB of data.
7 training instances is obviously not the right number.
As I can read Yue Chinese, I can tell the last instance is definitely not something that would appear on Common Crawl.
And even if you don't read Yue Chinese, you can tell the first six instances are problematic.
(It is embarrassing, as the 7 training instances look exactly like something from a pornographic novel or flirting messages in a chat of a dating app.)
It might not be the problem of the huggingface/datasets implementation, because when I tried to download the dataset from the official site, I found out that the zip file is corrupted.
I will try to inform the host of OSCAR corpus later.
Anyway, a remake of this dataset in huggingface/datasets is needed, perhaps after the host of the dataset fixes the issue.
> Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https://arxiv.org/pdf/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. In fact, could please you open an issue on [our repo](https://github.com/oscar-corpus/oscar-website/issues) as well so that we can track it?
Thanks a lot, the new post is here:
https://github.com/oscar-corpus/oscar-website/issues/11
| false
|
898,762,730
|
https://api.github.com/repos/huggingface/datasets/issues/2395
|
https://github.com/huggingface/datasets/pull/2395
| 2,395
|
`pretty_name` for dataset in YAML tags
|
closed
| 19
| 2021-05-22T09:24:45
| 2022-09-23T13:29:14
| 2022-09-23T13:29:13
|
bhavitvyamalik
|
[
"dataset contribution"
] |
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10; please let me know if they look good.
If a dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset`, since the config names were `plain_text`, `default`, `squad`, etc. (not so important in this case), whereas when a dataset has more than one config, I've added `config_name: full_name_of_dataset+config_name` so as to let the user know about the `config` here.
| true
|
898,156,795
|
https://api.github.com/repos/huggingface/datasets/issues/2392
|
https://github.com/huggingface/datasets/pull/2392
| 2,392
|
Update text classification template labels in DatasetInfo __post_init__
|
closed
| 6
| 2021-05-21T15:29:41
| 2021-05-28T11:37:35
| 2021-05-28T11:37:32
|
lewtun
|
[] |
This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is so avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the _post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO
| true
|
898,128,099
|
https://api.github.com/repos/huggingface/datasets/issues/2391
|
https://github.com/huggingface/datasets/issues/2391
| 2,391
|
Missing original answers in kilt-TriviaQA
|
closed
| 2
| 2021-05-21T14:57:07
| 2021-06-14T17:29:11
| 2021-06-14T17:29:11
|
PaulLerner
|
[
"bug"
] |
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative answer which are accepted for the question.
However it'd be nice to know the original answer to the question (the only fields in `output` are `'answer', 'meta', 'provenance'`)
## How to fix
It can be fixed by retrieving the original answer from the original TriviaQA (e.g. `trivia_qa['train'][0]['answer']['value']`), perhaps at the same place as here where one retrieves the questions https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md#loading-the-kilt-knowledge-source-and-task-data
cc @yjernite who previously answered to an issue about KILT and TriviaQA :)
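A hedged sketch of the join described above, following the approach in the linked README; the config and field names used here (e.g. `triviaqa_support_only`, `question_id`) are assumptions to verify against the actual datasets:
```python
from datasets import load_dataset

kilt_triviaqa = load_dataset("kilt_tasks", name="triviaqa_support_only")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext")

# map kilt ids back to the original TriviaQA rows to recover the canonical answer
qid_to_index = {qid: i for i, qid in enumerate(trivia_qa["train"]["question_id"])}

def add_original_answer(example):
    row = trivia_qa["train"][qid_to_index[example["id"]]]
    example["original_answer"] = row["answer"]["value"]
    return example

kilt_with_answers = kilt_triviaqa["train"].map(add_original_answer)
```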
| false
|
897,903,642
|
https://api.github.com/repos/huggingface/datasets/issues/2390
|
https://github.com/huggingface/datasets/pull/2390
| 2,390
|
Add check for task templates on dataset load
|
closed
| 1
| 2021-05-21T10:16:57
| 2021-05-21T15:49:09
| 2021-05-21T15:49:06
|
lewtun
|
[] |
This PR adds a check that the features of a dataset match the schema of each compatible task template.
| true
|
897,822,270
|
https://api.github.com/repos/huggingface/datasets/issues/2389
|
https://github.com/huggingface/datasets/pull/2389
| 2,389
|
Insert task templates for text classification
|
closed
| 6
| 2021-05-21T08:36:26
| 2021-05-28T15:28:58
| 2021-05-28T15:26:28
|
lewtun
|
[] |
This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR
| true
|
897,767,470
|
https://api.github.com/repos/huggingface/datasets/issues/2388
|
https://github.com/huggingface/datasets/issues/2388
| 2,388
|
Incorrect URLs for some datasets
|
closed
| 0
| 2021-05-21T07:22:35
| 2021-06-04T17:39:45
| 2021-06-04T17:39:45
|
lewtun
|
[
"bug"
] |
## Describe the bug
It seems that the URLs for the following datasets are invalid:
- [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a
- [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/
As a result we can no longer load these datasets using `load_dataset`. The simple fix is to rename the URL in the dataset script - will do this asap.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# pick one of the datasets from the list above
ds = load_dataset("bn_hate_speech")
```
## Expected results
Dataset loads without error.
## Actual results
```
Downloading: 3.36kB [00:00, 1.07MB/s]
Downloading: 2.03kB [00:00, 678kB/s]
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: 951.48 KiB, generated: 949.84 KiB, post-processed: Unknown size, total: 1.86 MiB) to /Users/lewtun/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/load.py", line 744, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/lewtun/.cache/huggingface/modules/datasets_modules/datasets/bn_hate_speech/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c/bn_hate_speech.py", line 76, in _split_generators
train_path = dl_manager.download_and_extract(_URL)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 287, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 281, in cached_path
output_path = get_from_cache(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/Bengali_%20Hate_Speech_Dataset_Subset.csv
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 3.0.0
| false
|
897,566,666
|
https://api.github.com/repos/huggingface/datasets/issues/2387
|
https://github.com/huggingface/datasets/issues/2387
| 2,387
|
datasets 1.6 ignores cache
|
closed
| 13
| 2021-05-21T00:12:58
| 2021-05-26T16:07:54
| 2021-05-26T16:07:54
|
stas00
|
[
"bug"
] |
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}`
>
> while the same command with the latest version of datasets (actually starting at `1.6.0`) gives:
> > `{'train': [], 'validation': []}`
>
I also confirm that downgrading to `datasets==1.5.0` makes things fast again - i.e. cache is used.
to reproduce:
```
USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--dataset_name "stas/openwebtext-10k" \
--output_dir output_dir \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_train_samples 1000 \
--max_eval_samples 200 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--num_train_epochs 1 \
--warmup_steps 8 \
--block_size 64 \
--fp16 \
--report_to none
```
The first time, the startup is slow and shows some 5 tqdm bars. It shouldn't do that on subsequent runs, but with `datasets>1.5.0` it rebuilds on every run.
@lhoestq
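A hedged workaround sketch, assuming a `datasets` version that reads the `HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` environment variable (added in a later release) and that a value of 0 disables the in-memory behavior so the on-disk Arrow cache is used again:
```python
import os

# must be set before `datasets` is imported, since the config is read at import time
os.environ["HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES"] = "0"

from datasets import load_dataset

ds = load_dataset("stas/openwebtext-10k")
print(ds.cache_files)  # should list the on-disk Arrow cache files again
```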
| false
|
897,560,049
|
https://api.github.com/repos/huggingface/datasets/issues/2386
|
https://github.com/huggingface/datasets/issues/2386
| 2,386
|
Accessing Arrow dataset cache_files
|
closed
| 1
| 2021-05-20T23:57:43
| 2021-05-21T19:18:03
| 2021-05-21T19:18:03
|
Mehrad0711
|
[
"bug"
] |
## Describe the bug
In datasets 1.5.0 the following code snippet would have printed the cache_files:
```
train_data = load_dataset('conll2003', split='train', cache_dir='data')
print(train_data.cache_files[0]['filename'])
```
However, in the newest release (1.6.1), it prints an empty list.
I also tried loading the dataset with the `keep_in_memory=True` argument, but `cache_files` is still empty.
I was wondering if this is a bug or if I need to pass additional arguments so I can access the cache_files.
| false
|
897,206,823
|
https://api.github.com/repos/huggingface/datasets/issues/2385
|
https://github.com/huggingface/datasets/pull/2385
| 2,385
|
update citations
|
closed
| 0
| 2021-05-20T17:54:08
| 2021-05-21T12:38:18
| 2021-05-21T12:38:18
|
adeepH
|
[] |
To update citations for [Offenseval_dravidiain](https://huggingface.co/datasets/offenseval_dravidian)
| true
|
896,866,461
|
https://api.github.com/repos/huggingface/datasets/issues/2384
|
https://github.com/huggingface/datasets/pull/2384
| 2,384
|
Add args description to DatasetInfo
|
closed
| 2
| 2021-05-20T13:53:10
| 2021-05-22T09:26:16
| 2021-05-22T09:26:14
|
lewtun
|
[] |
Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning.
| true
|
895,779,723
|
https://api.github.com/repos/huggingface/datasets/issues/2383
|
https://github.com/huggingface/datasets/pull/2383
| 2,383
|
Improve example in rounding docs
|
closed
| 0
| 2021-05-19T18:59:23
| 2021-05-21T12:53:22
| 2021-05-21T12:36:29
|
mariosasko
|
[] |
Improves the example in the rounding subsection of the Split API docs. With this change, it should more clear what's the difference between the `closest` and the `pct1_dropremainder` rounding.
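For reference, a hedged sketch of how the two rounding modes are selected via `ReadInstruction` (assuming the public API at the time):
```python
from datasets import load_dataset, ReadInstruction

# "closest" (default): percentage boundaries are rounded to the closest example
ds_closest = load_dataset(
    "squad", split=ReadInstruction("train", to=33, unit="%", rounding="closest")
)

# "pct1_dropremainder": each 1% block has the same size, remainder examples are dropped
ds_drop = load_dataset(
    "squad", split=ReadInstruction("train", to=33, unit="%", rounding="pct1_dropremainder")
)
```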
| true
|