html_url: string (51–51 chars)
title: string (6–280 chars)
comments: string (67–24.7k chars)
body: string (51–36.2k chars)
__index_level_0__: int64 (1–1.17k)
comment_length: int64 (16–1.45k)
text: string (190–38.3k chars)
embeddings: list
https://github.com/huggingface/datasets/issues/3227
Error in `Json(datasets.ArrowBasedBuilder)` class
:open_mouth: you're right, that did it! I just put everything on a single line (my file only has a single example) and that fixed the error. Thank you so much!
## Describe the bug When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails. ## Steps to reproduce the bug Create a folder that contains the following: ``` . ├── testdata │   └── mydata.json └── test.py ``` Please download [this file](https://github.com/...
1,124
30
Error in `Json(datasets.ArrowBasedBuilder)` class ## Describe the bug When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails. ## Steps to reproduce the bug Create a folder that contains the following: ``` . ├── testdata │   └── mydata.json └── test.py ``...
[ -1.0981725454330444, -0.8957386016845703, -0.8776326775550842, 1.4748728275299072, -0.06334543228149414, -1.270065426826477, 0.1679607629776001, -1.0915708541870117, 1.7226824760437012, -0.6693846583366394, 0.2722286283969879, -1.7230216264724731, -0.03758775442838669, -0.5650842785835266,...
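The workaround confirmed in the comment above (keeping the whole record on a single line) can be sketched in plain Python; the field name and size below are illustrative assumptions, not taken from the original report:

```python
import json

# Hypothetical sketch: a JSON Lines record kept on one line never spans
# a parser block boundary, even when the `text` field is very large.
record = {"text": "x" * 100_000}  # illustrative size, larger than a typical block
line = json.dumps(record)         # json.dumps emits no newlines by default

assert "\n" not in line                          # the record fits on a single line
assert json.loads(line)["text"] == record["text"]  # and round-trips intact
```

This is why reformatting the file as one object per line sidesteps the block-size issue reported in this thread.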
https://github.com/huggingface/datasets/issues/3210
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py
Hi ! Do you have some kind of proxy in your browser that gives you access to the internet? Maybe you're having this error because you don't have access to this URL from Python?
when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_tra...
1,127
35
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lan...
[ -1.2624717950820923, -0.8579744100570679, -0.6754687428474426, 1.47709321975708, -0.04013143479824066, -1.3197208642959595, 0.07438872009515762, -0.8613892197608948, 1.4901608228683472, -0.7189950346946716, 0.2859741151332855, -1.6582257747650146, -0.029970398172736168, -0.5216056108474731...
https://github.com/huggingface/datasets/issues/3210
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py
You don't need authentication to access those GitHub-hosted files. Please check that you can access this URL from your browser and also from your terminal.
when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_tra...
1,127
26
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lan...
[ -1.2663935422897339, -0.8743477463722229, -0.672553539276123, 1.4626193046569824, -0.03271476924419403, -1.3178071975708008, 0.09855976700782776, -0.8768602609634399, 1.476055383682251, -0.7340610027313232, 0.30316418409347534, -1.6504809856414795, -0.028396867215633392, -0.507796943187713...
https://github.com/huggingface/datasets/issues/3204
FileNotFoundError for TupleIE dataset
@mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix? Thanks.
Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFound error. Is the data not available? Many thanks.
1,128
18
FileNotFoundError for TupleIE dataset Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFound error. Is the data not available? Many thanks. @mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix? Thanks.
[ -1.2971563339233398, -0.96475750207901, -0.6569271683692932, 1.5173453092575073, -0.29272112250328064, -1.069276213645935, 0.22054529190063477, -1.0165016651153564, 1.7253628969192505, -0.85631263256073, 0.11518372595310211, -1.6789261102676392, -0.1664426177740097, -0.4555756747722626, ...
https://github.com/huggingface/datasets/issues/3204
FileNotFoundError for TupleIE dataset
Hi @arda-vianai, first, you can try: ```python import datasets dataset = datasets.load_dataset('tuple_ie', 'all', revision="master") ``` If this doesn't work, your version of `datasets` is missing some features that are required to run the dataset script, so install the master version with the following command...
Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFound error. Is the data not available? Many thanks.
1,128
64
FileNotFoundError for TupleIE dataset Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFound error. Is the data not available? Many thanks. Hi @arda-vianai, first, you can try: ```python import datasets dataset = datasets.load_dataset('tuple_ie', 'all', revision="master") `...
[ -1.19724702835083, -0.983649492263794, -0.7403799295425415, 1.5107733011245728, -0.24288323521614075, -1.270204782485962, 0.12432678043842316, -1.0554274320602417, 1.8113406896591187, -0.8188554644584656, 0.22508582472801208, -1.7623807191848755, -0.08189405500888824, -0.6775622963905334, ...
https://github.com/huggingface/datasets/issues/3204
FileNotFoundError for TupleIE dataset
@mariosasko Thanks, it is working now. I actually did that before but I didn't restart the kernel. I restarted it and it works now. My bad!!! Many thanks and great job! -arda
Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFound error. Is the data not available? Many thanks.
1,128
32
FileNotFoundError for TupleIE dataset Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFound error. Is the data not available? Many thanks. @mariosasko Thanks, it is working now. I actually did that before but I didn't restart the kernel. I restarted it and it works now. My bad!...
[ -1.3785902261734009, -0.9492223262786865, -0.593754768371582, 1.3794021606445312, -0.28973710536956787, -1.1257926225662231, 0.1259804517030716, -1.0615161657333374, 1.6905467510223389, -0.9296573996543884, 0.15422511100769043, -1.6453397274017334, -0.12109380215406418, -0.3946808278560638...
https://github.com/huggingface/datasets/issues/3191
Dataset viewer issue for '*compguesswhat*'
```python >>> import datasets >>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True) >>> next(iter(dataset)) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/sit...
## Dataset viewer issue for '*compguesswhat*' **Link:** https://huggingface.co/datasets/compguesswhat File not found Am I the one who added this dataset ? No
1,131
137
Dataset viewer issue for '*compguesswhat*' ## Dataset viewer issue for '*compguesswhat*' **Link:** https://huggingface.co/datasets/compguesswhat File not found Am I the one who added this dataset ? No ```python >>> import datasets >>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat...
[ -1.1956356763839722, -0.9512522220611572, -0.7212889194488525, 1.3971904516220093, -0.15875674784183502, -1.2568542957305908, 0.10408779233694077, -1.0211855173110962, 1.591296911239624, -0.7029871344566345, 0.13899047672748566, -1.7139205932617188, -0.14498084783554077, -0.512530207633972...
https://github.com/huggingface/datasets/issues/3191
Dataset viewer issue for '*compguesswhat*'
There is an issue with the URLs of their data files: https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1 > Dropbox Error: That didn't work for some reason Error reported to their repo: - https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1
## Dataset viewer issue for '*compguesswhat*' **Link:** https://huggingface.co/datasets/compguesswhat File not found Am I the one who added this dataset ? No
1,131
28
Dataset viewer issue for '*compguesswhat*' ## Dataset viewer issue for '*compguesswhat*' **Link:** https://huggingface.co/datasets/compguesswhat File not found Am I the one who added this dataset ? No There is an issue with the URLs of their data files: https://www.dropbox.com/s/l0nc13udml6vs0w/compguess...
[ -1.2021222114562988, -1.0894232988357544, -0.8547630906105042, 1.4378511905670166, -0.18267469108104706, -1.2248207330703735, 0.11819412559270859, -0.9646920561790466, 1.5706435441970825, -0.5758435726165771, 0.24780148267745972, -1.6743496656417847, -0.0555340051651001, -0.637312948703765...
https://github.com/huggingface/datasets/issues/3190
combination of shuffle and filter results in a bug
Hi ! There was a regression in `datasets` 1.12 that introduced this bug. It has been fixed by #3019 in 1.13. Can you try updating `datasets` and trying again?
## Describe the bug Hi, I would like to shuffle a dataset, then filter it based on each existing label. However, the combination of `filter` and `shuffle` seems to result in a bug. In the minimal example below, as you see in the filtered results, the filtered labels are not unique, meaning filter has not worked. Any su...
1,132
31
combination of shuffle and filter results in a bug ## Describe the bug Hi, I would like to shuffle a dataset, then filter it based on each existing label. However, the combination of `filter` and `shuffle` seems to result in a bug. In the minimal example below, as you see in the filtered results, the filtered labels ...
[ -1.1492472887039185, -0.8812903761863708, -0.7271469235420227, 1.4592254161834717, -0.15578298270702362, -1.1553094387054443, 0.17843475937843323, -1.1352170705795288, 1.7802836894989014, -0.8389032483100891, 0.24476273357868195, -1.7247135639190674, 0.05199643224477768, -0.604841351509094...
https://github.com/huggingface/datasets/issues/3189
conll2003 incorrect label explanation
Hi @BramVanroy, since these fields are of type `ClassLabel` (you can check this with `dset.features`), you can inspect the possible values with: ```python dset.features[field_name].feature.names # .feature because it's a sequence of labels ``` and to find the mapping between names and integers, use: ```pyth...
In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows > - `id`: a `string` feature. > - `tokens`: a `list` of `string` features. > - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(`...
1,133
63
conll2003 incorrect label explanation In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows > - `id`: a `string` feature. > - `tokens`: a `list` of `string` features. > - `pos_tags`: a `list` of classification labels, with possible values including ...
[ -1.0751079320907593, -0.7856404781341553, -0.7746471166610718, 1.5610393285751343, -0.15691281855106354, -1.3770769834518433, 0.29570773243904114, -1.082038402557373, 1.828553557395935, -0.8348392248153687, 0.44766634702682495, -1.7142179012298584, 0.10602924972772598, -0.7290433645248413,...
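The name-to-integer mapping described in the comment above can be illustrated with a plain-Python analogue (the helper dicts here are illustrative, not the `datasets` `ClassLabel` API; the first five `pos_tags` names are taken from the README excerpt quoted in the issue):

```python
# Plain-Python sketch of a ClassLabel-style name<->int mapping.
# The conll2003 README lists `"` (0), `''` (1), `#` (2), `$` (3), `(` (4), ...
names = ['"', "''", "#", "$", "("]
str2int = {name: i for i, name in enumerate(names)}  # label name -> integer id
int2str = dict(enumerate(names))                     # integer id -> label name

str2int["#"]  # 2
int2str[3]    # '$'
```

In `datasets` itself the equivalent lookups are exposed on the feature object, which is what the comment is pointing the reporter at.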
https://github.com/huggingface/datasets/issues/3188
conll2002 issues
Hi ! Thanks for reporting :) This is related to https://github.com/huggingface/datasets/issues/2742, I'm working on it. It should fix the viewer for around 80 datasets.
**Link:** https://huggingface.co/datasets/conll2002 The dataset viewer throws a server error when trying to preview the dataset. ``` Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet ``` I...
1,134
24
conll2002 issues **Link:** https://huggingface.co/datasets/conll2002 The dataset viewer throws a server error when trying to preview the dataset. ``` Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implem...
[ -1.1928236484527588, -0.9418654441833496, -0.8794344067573547, 1.5500458478927612, -0.178982675075531, -1.2934726476669312, 0.06981675326824188, -1.004502534866333, 1.6279656887054443, -0.6632462739944458, 0.28508618474006653, -1.6786152124404907, -0.04737503454089165, -0.5454598069190979,...
https://github.com/huggingface/datasets/issues/3188
conll2002 issues
Ah, hadn't seen that sorry. The scrambled "point of contact" is a separate issue though, I think.
**Link:** https://huggingface.co/datasets/conll2002 The dataset viewer throws a server error when trying to preview the dataset. ``` Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet ``` I...
1,134
17
conll2002 issues **Link:** https://huggingface.co/datasets/conll2002 The dataset viewer throws a server error when trying to preview the dataset. ``` Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implem...
[ -1.185798168182373, -0.9098573923110962, -0.9363990426063538, 1.500529170036316, -0.21281874179840088, -1.273804783821106, 0.06552891433238983, -1.0319269895553589, 1.628920078277588, -0.6639599800109863, 0.25130024552345276, -1.6811925172805786, -0.09912671148777008, -0.5911166071891785, ...
https://github.com/huggingface/datasets/issues/3186
Dataset viewer for nli_tr
It's an issue with the streaming mode: ```python >>> import datasets >>> dataset = datasets.load_dataset('nli_tr', name='snli_tr',split='test', streaming=True) >>> next(iter(dataset)) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.ven...
## Dataset viewer issue for '*nli_tr*' **Link:** https://huggingface.co/datasets/nli_tr Hello, Thank you for the new dataset preview feature that will help the users to view the datasets online. We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the error below. The error must be d...
1,135
119
Dataset viewer for nli_tr ## Dataset viewer issue for '*nli_tr*' **Link:** https://huggingface.co/datasets/nli_tr Hello, Thank you for the new dataset preview feature that will help the users to view the datasets online. We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the erro...
[ -1.2394853830337524, -0.7940859198570251, -0.6487321853637695, 1.3459410667419434, -0.00405103201046586, -1.388659119606018, 0.03606990724802017, -0.9163230657577515, 1.6537195444107056, -0.8394179344177246, 0.27051088213920593, -1.7367701530456543, -0.005572214722633362, -0.54047006368637...
https://github.com/huggingface/datasets/issues/3186
Dataset viewer for nli_tr
Apparently there is an issue with the data source URLs: Server Not Found - https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip We are contacting the authors to ask them. @e-budur, you are one of the authors: are you aware of the issue with the URLs of your data?
## Dataset viewer issue for '*nli_tr*' **Link:** https://huggingface.co/datasets/nli_tr Hello, Thank you for the new dataset preview feature that will help the users to view the datasets online. We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the error below. The error must be d...
1,135
43
Dataset viewer for nli_tr ## Dataset viewer issue for '*nli_tr*' **Link:** https://huggingface.co/datasets/nli_tr Hello, Thank you for the new dataset preview feature that will help the users to view the datasets online. We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the erro...
[ -1.215806484222412, -0.7645496129989624, -0.6968789100646973, 1.3220577239990234, 0.018467221409082413, -1.4032725095748901, 0.04846670851111412, -0.9023398160934448, 1.6639496088027954, -0.8551952838897705, 0.32544252276420593, -1.7192356586456299, 0.07369410991668701, -0.5765167474746704...
https://github.com/huggingface/datasets/issues/3185
7z dataset preview not implemented?
It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallen back to normal mode. Working on a fix.
## Dataset viewer issue for dataset 'samsum' **Link:** https://huggingface.co/datasets/samsum Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
1,136
35
7z dataset preview not implemented? ## Dataset viewer issue for dataset 'samsum' **Link:** https://huggingface.co/datasets/samsum Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not im...
[ -1.2074687480926514, -0.9131606221199036, -0.8425233960151672, 1.3861175775527954, -0.22318238019943237, -1.1567606925964355, 0.05182310938835144, -0.9353853464126587, 1.632861614227295, -0.5305943489074707, 0.22784851491451263, -1.729848861694336, -0.18259796500205994, -0.4667822718620300...
https://github.com/huggingface/datasets/issues/3181
`None` converted to `"None"` when loading a dataset
Hi @eladsegal, thanks for reporting. @mariosasko I saw you are already working on this, but maybe my comment will be useful to you. All values are cast to their corresponding feature type (including `None` values). For example, if the feature type is `Value("bool")`, `None` is cast to `False`. It is true th...
## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text...
1,137
65
`None` converted to `"None"` when loading a dataset ## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode...
[ -1.0999916791915894, -0.8553860187530518, -0.7405563592910767, 1.594931960105896, -0.11760137230157852, -1.2051451206207275, 0.15265530347824097, -1.0315252542495728, 1.8005903959274292, -0.8054513931274414, 0.3664199113845825, -1.6940126419067383, 0.05141976475715637, -0.62486332654953, ...
https://github.com/huggingface/datasets/issues/3181
`None` converted to `"None"` when loading a dataset
Thanks for reporting. This is actually a breaking change that I think can cause issues when users preprocess their data. String columns used to be nullable. Maybe we can correct https://github.com/huggingface/datasets/pull/3158 to keep the None values and avoid this breaking change? EDIT: the other types (bool, ...
## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text...
1,137
54
`None` converted to `"None"` when loading a dataset ## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode...
[ -1.0911318063735962, -0.882531464099884, -0.7324954867362976, 1.550197958946228, -0.0904211550951004, -1.2375785112380981, 0.11741011589765549, -1.0219576358795166, 1.7756493091583252, -0.7770442962646484, 0.35934382677078247, -1.679020643234253, 0.03219295293092728, -0.5986884236335754, ...
https://github.com/huggingface/datasets/issues/3181
`None` converted to `"None"` when loading a dataset
So what would be the best way to handle a feature that can have a null value in some of the instances? So far I used `None`. Using the empty string won't be a good option, as it can be an actual value in the data and is not the same as not having a value at all.
## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text...
1,137
58
`None` converted to `"None"` when loading a dataset ## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode...
[ -1.059288740158081, -0.8610916137695312, -0.7823936939239502, 1.555702567100525, -0.11233387142419815, -1.2522691488265991, 0.1433178037405014, -0.9937381148338318, 1.7844302654266357, -0.8291566371917725, 0.36793792247772217, -1.6656520366668701, 0.0578678622841835, -0.6378370523452759, ...
https://github.com/huggingface/datasets/issues/3181
`None` converted to `"None"` when loading a dataset
Hi @eladsegal, Use `None`. As @albertvillanova correctly pointed out, this change in conversion was introduced (by mistake) in #3158. To avoid it, install the earlier revision with: ``` pip install git+https://github.com/huggingface/datasets.git@8107844ec0e7add005db0585c772ee20adc01a5e ``` I'm making all the f...
## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text...
1,137
52
`None` converted to `"None"` when loading a dataset ## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode...
[ -1.1104297637939453, -0.8794372081756592, -0.7295841574668884, 1.5458197593688965, -0.11839979141950607, -1.2499629259109497, 0.1818075180053711, -1.0077917575836182, 1.8026396036148071, -0.7718185186386108, 0.35351356863975525, -1.6729868650436401, 0.03571954742074013, -0.6544751524925232...
https://github.com/huggingface/datasets/issues/3181
`None` converted to `"None"` when loading a dataset
https://github.com/huggingface/datasets/pull/3195 fixed it, we'll do a new release soon :) For now feel free to install `datasets` from the master branch
## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text...
1,137
21
`None` converted to `"None"` when loading a dataset ## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode...
[ -1.0740617513656616, -0.8484784960746765, -0.7379323244094849, 1.5833247900009155, -0.06813623011112213, -1.2702780961990356, 0.12588250637054443, -0.9717298746109009, 1.7830504179000854, -0.7928630709648132, 0.3547161817550659, -1.7074229717254639, 0.028199702501296997, -0.624476075172424...
https://github.com/huggingface/datasets/issues/3181
`None` converted to `"None"` when loading a dataset
Thanks, but unfortunately looks like it isn't fixed yet 😒 [notebook for 1.14.0](https://colab.research.google.com/drive/1SV3sFXPJMWSQgbm4pr9Y1Q8OJ4JYKcDo?usp=sharing) [notebook for master](https://colab.research.google.com/drive/145wDpuO74MmsuI0SVLcI1IswG6aHpyhi?usp=sharing)
## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text...
1,137
16
`None` converted to `"None"` when loading a dataset ## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode...
[ -1.117220401763916, -0.8870999813079834, -0.7563175559043884, 1.5397804975509644, -0.10925611853599548, -1.27373206615448, 0.13632306456565857, -0.9916651248931885, 1.7711703777313232, -0.8115760684013367, 0.348585844039917, -1.676233172416687, 0.06569804251194, -0.5813008546829224, -0.7...
https://github.com/huggingface/datasets/issues/3181
`None` converted to `"None"` when loading a dataset
Oh, sorry. I deleted the fix by accident when I was resolving a merge conflict. Let me fix this real quick.
## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text...
1,137
21
`None` converted to `"None"` when loading a dataset ## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode...
[ -1.0886856317520142, -0.8367547988891602, -0.7209291458129883, 1.547113060951233, -0.11470504850149155, -1.234330177307129, 0.16596844792366028, -1.0085505247116089, 1.7746423482894897, -0.8325297832489014, 0.35446053743362427, -1.6905018091201782, 0.04861191660165787, -0.6268988251686096,...
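The breaking change discussed in this thread amounts to casting `None` through `str`. A minimal plain-Python sketch of the difference between the reported behavior and the nullable behavior the reporter expects (the function names are hypothetical, not from the library):

```python
# Hypothetical sketch: naive string casting turns None into the string "None".
def naive_cast(values):
    return [str(v) for v in values]  # the reported bug: str(None) == "None"

def nullable_cast(values):
    # keeps nulls as nulls, matching the old nullable-string-column behavior
    return [str(v) if v is not None else None for v in values]

naive_cast(["a", None])     # ['a', 'None']
nullable_cast(["a", None])  # ['a', None]
```

This also shows why the empty string is not a substitute for `None`, as the reporter notes: `""` is a legitimate data value, while `None` means the value is absent.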
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
After some digging, I found that this is caused by `dill` using `recurse=True` when trying to dump the object. The problem also occurs without multiprocessing. I can only find [the following information](https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dumps) about this: > If recurse=True, then object...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
108
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Hi ! Thanks for reporting. Yes, `recurse=True` is necessary to be able to hash all the objects that are passed to the `map` function. EDIT: hopefully this object can be serializable soon, but otherwise we can consider adding more control to the user on how to hash objects that are not serializable (as mentioned in h...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
56
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
I submitted a PR to spacy that should fix this issue (linked above). I'll leave this open until that PR is merged.
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
22
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
@lhoestq After some testing I find that even with the updated spaCy, no cache files are used. I do not get any warnings though, but I can see that map is run every time I run the code. Do you have thoughts about why? If you want to try the tests below, make sure to install spaCy from [here](https://github.com/BramVanro...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
207
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
It looks like every time you load `en_core_web_sm` you get a different python object: ```python import spacy from datasets.fingerprint import Hasher nlp1 = spacy.load("en_core_web_sm") nlp2 = spacy.load("en_core_web_sm") Hasher.hash(nlp1), Hasher.hash(nlp2) # ('f6196a33882fea3b', 'a4c676a071f266ff') ``` Here...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
109
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Thanks for searching! I went looking, and found that this is an implementation detail of thinc https://github.com/explosion/thinc/blob/68691e303ae68cae4bc803299016f1fc064328bf/thinc/model.py#L96-L98 Presumably (?) exactly to distinguish between different parts in memory when multiple models are loaded. Do not thi...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
119
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
It can be even simpler to hash the bytes of the pipeline instead ```python nlp1.to_bytes() == nlp2.to_bytes() # True ``` IMO we should integrate the custom hashing for spacy models into `datasets` (we use a custom Pickler for that). What could be done on Spacy's side instead (if they think it's nice to have) is...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
114
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
I do not quite understand what you mean. as far as I can tell, using `to_bytes` does a pickle dump behind the scene (with `srsly`), recursively using `to_bytes` on the required objects. Therefore, the result of `to_bytes` is a deterministic pickle dump AFAICT. Or do you mean that you wish that using your own pickler an...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
271
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Interestingly, my PR does not solve the issue discussed above. The `tokenize` function hash is different on every run, because for some reason `nlp.__call__` has a different hash every time. The issue therefore seems to run much deeper than I thought. If you have any ideas, I'm all ears. ```shell git clone https://...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
151
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Hi ! I just answered in your PR :) In order for your custom hashing to be used for nested objects, you must integrate it into our recursive pickler that we use for hashing.
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
34
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
I don't quite understand the design constraints of `datasets` or the script that you're running, but my usual advice is to avoid using pickle unless you _absolutely_ have to. So for instance instead of doing your `partial` over the `nlp` object itself, can you just pass the string `en_core_web_sm` in? This will mean ca...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
177
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Hi Matthew, thanks for chiming in! We are currently implementing exactly what you suggest: `to_bytes()` as a default before pickling - but we may prefer `to_dict` to avoid double dumping. `datasets` uses pickle dumps (actually dill) to get unique representations of processing steps (a "fingerprint" or hash). So it n...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
275
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Is there a workaround for this? maybe by explicitly requesting datasets to cache the result of `.map()`?
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
17
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory. As a workaround you can set the...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
102
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning: ``` Dataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-185088602...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
167
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
> Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory. > > As a workaround you can s...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
143
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
> I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning: > > ``` > Dataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
188
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
I see this has just been closed - it seems quite relevant to another tokenizer I have been trying to use, the `vinai/phobert` family of tokenizers https://huggingface.co/vinai/phobert-base https://huggingface.co/vinai/phobert-large I ran into an issue where a large dataset took several hours to tokenize, the pro...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
105
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
In your case it looks like the job failed before caching the data - maybe one of the processes crashed
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
20
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Interesting. Thanks for the observation. Any suggestions on how to start tracking that down? Perhaps run it singlethreaded and see if it crashes?
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
23
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
You can monitor your RAM and disk space in case a process dies from OOM or disk full, and when it hangs you can check how many processes are running. IIRC there are other start methods for multiprocessing in python that may show an error message if a process dies. Running on a single process can also help debugging ...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
61
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Hi @tung-msol could you open a new issue and share the error you got and the map function you used ?
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
1,139
21
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -1.2768163681030273, -0.8986586928367615, -0.6862420439720154, 1.4672660827636719, -0.18224464356899261, -1.15681791305542, 0.19105981290340424, -1.0926718711853027, 1.6184024810791016, -0.8082042932510376, 0.4367443025112152, -1.5654189586639404, 0.07511787861585617, -0.5814680457115173, ...
https://github.com/huggingface/datasets/issues/3177
More control over TQDM when using map/filter with multiple processes
Hi, It's hard to provide an API that would cover all use-cases with tqdm in this project. However, you can make it work by defining a custom decorator (a bit hacky tho) as follows: ```python import datasets def progress_only_on_rank_0(func): def wrapper(*args, **kwargs): rank = kwargs.get("rank...
It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proces>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ``` The above snippet leads to a lot of TQDM bars and depending on your...
1,140
129
More control over TQDM when using map/filter with multiple processes It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proces>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ```...
[ -1.1732048988342285, -0.8198485374450684, -0.8211680054664612, 1.4263813495635986, -0.10249532014131546, -1.2223129272460938, 0.09469931572675705, -1.1753627061843872, 1.492700219154358, -0.8106050491333008, 0.3664986193180084, -1.693397045135498, 0.02309465780854225, -0.6575983166694641, ...
https://github.com/huggingface/datasets/issues/3177
More control over TQDM when using map/filter with multiple processes
Inspiration may be found at `transformers`. https://github.com/huggingface/transformers/blob/4a394cf53f05e73ab9bbb4b179a40236a5ffe45a/src/transformers/trainer.py#L1231-L1233 To get unique IDs for each worker, see https://stackoverflow.com/a/10192611/1150683
It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proces>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ``` The above snippet leads to a lot of TQDM bars and depending on your...
1,140
16
More control over TQDM when using map/filter with multiple processes It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proces>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ```...
[ -1.096755862236023, -0.8034772872924805, -0.9246952533721924, 1.4608101844787598, -0.012608489021658897, -1.2234196662902832, 0.08287712931632996, -1.133273959159851, 1.5177874565124512, -0.7048189640045166, 0.32350146770477295, -1.7377831935882568, 0.010070355609059334, -0.633258342742919...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
NB: even if the error is raised, the dataset is successfully cached. So restarting the script after every `map()` allows to ultimately run the whole preprocessing. But this prevents to realistically run the code over multiple nodes.
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
1,141
37
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. Th...
[ -1.1256572008132935, -0.8851135969161987, -0.6827611327171326, 1.450364589691162, -0.18745926022529602, -1.290369987487793, 0.28105470538139343, -1.0788518190383911, 1.776857614517212, -0.7800658345222473, 0.3009101152420044, -1.6550285816192627, 0.0002498980611562729, -0.5880305767059326,...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
Hi, It's not easy to debug the problem without the script. I may be wrong since I'm not very familiar with PyTorch Lightning, but shouldn't you preprocess the data in the `prepare_data` function of `LightningDataModule` and not in the `setup` function. As you can't modify the module state in `prepare_data` (accordi...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
1,141
99
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. Th...
[ -1.1256572008132935, -0.8851135969161987, -0.6827611327171326, 1.450364589691162, -0.18745926022529602, -1.290369987487793, 0.28105470538139343, -1.0788518190383911, 1.776857614517212, -0.7800658345222473, 0.3009101152420044, -1.6550285816192627, 0.0002498980611562729, -0.5880305767059326,...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
Hi @mariosasko, thank you for the hint, that helped me to move forward with that issue. I did a major refactoring of my project to disentangle my `LightningDataModule` and `Dataset`. Just FYI, it looks like: ```python class Builder(): def __call__() -> DatasetDict: # load and preprocess the data ...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
1,141
170
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. Th...
[ -1.1256572008132935, -0.8851135969161987, -0.6827611327171326, 1.450364589691162, -0.18745926022529602, -1.290369987487793, 0.28105470538139343, -1.0788518190383911, 1.776857614517212, -0.7800658345222473, 0.3009101152420044, -1.6550285816192627, 0.0002498980611562729, -0.5880305767059326,...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
Please allow me to revive this discussion, as I have an extremely similar issue. Instead of an error, my datasets functions simply aren't caching properly. My setup is almost the same as yours, with hydra to configure my experiment parameters. @vlievin Could you confirm if your code correctly loads the cache? If so,...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
1,141
85
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. Th...
[ -1.1256572008132935, -0.8851135969161987, -0.6827611327171326, 1.450364589691162, -0.18745926022529602, -1.290369987487793, 0.28105470538139343, -1.0788518190383911, 1.776857614517212, -0.7800658345222473, 0.3009101152420044, -1.6550285816192627, 0.0002498980611562729, -0.5880305767059326,...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
Hello @mariomeissner, very sorry for the late reply, I hope you have found a solution to your problem! I don't have public code at the moment. I have not experienced any other issue with hydra, even if I don't understand why changing the location of the definition of `run()` fixed the problem. Overall, I don't h...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
1,141
74
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -1.1256572008132935, -0.8851135969161987, -0.6827611327171326, 1.450364589691162, -0.18745926022529602, -1.290369987487793, 0.28105470538139343, -1.0788518190383911, 1.776857614517212, -0.7800658345222473, 0.3009101152420044, -1.6550285816192627, 0.0002498980611562729, -0.5880305767059326,...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
I solved my issue by turning the map callable into a class static method, like they do in `lightning-transformers`. Very strange...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
1,141
21
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -1.1256572008132935, -0.8851135969161987, -0.6827611327171326, 1.450364589691162, -0.18745926022529602, -1.290369987487793, 0.28105470538139343, -1.0788518190383911, 1.776857614517212, -0.7800658345222473, 0.3009101152420044, -1.6550285816192627, 0.0002498980611562729, -0.5880305767059326,...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
I have this issue with datasets v2.5.2 with Python 3.8.10 on Ubuntu 20.04.4 LTS. It does not occur when num_proc=1. When num_proc>1, it intermittently occurs and will cause the process to hang. As previously mentioned, it occurs even when datasets have been previously cached. I have tried wrapping logic in a static cla...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
1,141
59
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -1.1256572008132935, -0.8851135969161987, -0.6827611327171326, 1.450364589691162, -0.18745926022529602, -1.290369987487793, 0.28105470538139343, -1.0788518190383911, 1.776857614517212, -0.7800658345222473, 0.3009101152420044, -1.6550285816192627, 0.0002498980611562729, -0.5880305767059326,...
https://github.com/huggingface/datasets/issues/3171
Raise exceptions instead of using assertions for control flow
Adding the remaining tasks for this issue to help new code contributors. $ cd src/datasets && ack assert -lc - [x] commands/convert.py:1 - [x] arrow_reader.py:3 - [x] load.py:7 - [x] utils/py_utils.py:2 - [x] features/features.py:9 - [x] arrow_writer.py:7 - [x] search.py:6 - [x] table.py:1 - [x] metric.py:...
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, there is a total of 87 files with `assert` statements (located u...
1,142
61
Raise exceptions instead of using assertions for control flow Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, ther...
[ -1.3465375900268555, -0.7987534999847412, -0.8411737084388733, 1.452647089958191, -0.16265514492988586, -1.2206523418426514, 0.07383418083190918, -1.0279417037963867, 1.6717920303344727, -0.7665635943412781, 0.35935431718826294, -1.6804202795028687, 0.10921455174684525, -0.5468869209289551...
https://github.com/huggingface/datasets/issues/3171
Raise exceptions instead of using assertions for control flow
Hi all, I am interested in taking up `fingerprint.py`, `search.py`, `arrow_writer.py` and `metric.py`. Will raise a PR soon!
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, there is a total of 87 files with `assert` statements (located u...
1,142
18
Raise exceptions instead of using assertions for control flow Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, ther...
[ -1.1491758823394775, -0.9067763090133667, -0.8584375381469727, 1.4407182931900024, -0.24954241514205933, -1.3536908626556396, 0.22718903422355652, -1.0899450778961182, 1.7442253828048706, -0.8560987710952759, 0.4315868020057678, -1.6795883178710938, 0.07003068923950195, -0.6154617667198181...
https://github.com/huggingface/datasets/issues/3168
OpenSLR/83 is empty
Hi @tyrius02, thanks for reporting. I see you self-assigned this issue: are you working on this?
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected resul...
1,143
16
OpenSLR/83 is empty ## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``...
[ -1.1639569997787476, -0.9612900614738464, -0.7580885887145996, 1.4381530284881592, -0.2405116707086563, -1.2051752805709839, 0.1510688066482544, -0.9826123714447021, 1.768053650856018, -0.7353419661521912, 0.30043044686317444, -1.7498258352279663, -0.009621880017220974, -0.6658338904380798...
https://github.com/huggingface/datasets/issues/3168
OpenSLR/83 is empty
@albertvillanova Yes. Since I introduced the broken config, I figured I should fix it too. I've got it working, but I'm struggling with one of the tests. I've started a PR so I/we can work through it.
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected resul...
1,143
35
OpenSLR/83 is empty ## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``...
[ -1.1781080961227417, -0.9597163200378418, -0.7798928618431091, 1.426313042640686, -0.23168079555034637, -1.200372338294983, 0.1673726588487625, -0.968491792678833, 1.743032693862915, -0.7391090393066406, 0.31458520889282227, -1.7466412782669067, -0.009990445338189602, -0.6543440818786621, ...
https://github.com/huggingface/datasets/issues/3167
bookcorpusopen no longer works
I tried with the latest changes from #3280 on Google Colab and it worked fine :) We'll do a new release soon; in the meantime you can use the updated version with: ```python load_dataset("bookcorpusopen", revision="master") ```
## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usa...
1,144
36
bookcorpusopen no longer works ## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatica...
[ -1.1713382005691528, -0.8597288131713867, -0.7467466592788696, 1.462351679801941, -0.11907734721899033, -1.257006049156189, 0.1193060353398323, -1.015738844871521, 1.7440966367721558, -0.8353912234306335, 0.29046764969825745, -1.6421194076538086, 0.02283443510532379, -0.5554172992706299, ...
https://github.com/huggingface/datasets/issues/3164
Add raw data files to the Hub with GitHub LFS for canonical dataset
Hi @zlucia, I would actually suggest hosting the dataset as a huggingface.co-hosted dataset. The only difference with a "canonical"/legacy dataset is that it's nested under an organization (here `stanford` or `stanfordnlp` for instance – completely up to you) but then you can upload your data using git-lfs (unlike "...
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team...
1,145
74
Add raw data files to the Hub with GitHub LFS for canonical dataset I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term stor...
[ -1.2527854442596436, -1.03347909450531, -0.7845445275306702, 1.3781174421310425, -0.09623227268457413, -1.3032963275909424, 0.04748610407114029, -1.0734508037567139, 1.6342811584472656, -0.7610678672790527, 0.35408908128738403, -1.728800654411316, -0.06977874785661697, -0.5358561873435974,...
https://github.com/huggingface/datasets/issues/3164
Add raw data files to the Hub with GitHub LFS for canonical dataset
Hi @zlucia, As @julien-c pointed out, the way to store/host raw data files in our Hub is by using what we call "community" datasets: - either at your personal namespace: `load_dataset("zlucia/casehold")` - or at an organization namespace: for example, if you create the organization `reglab`, then `load_dataset("re...
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team...
1,145
222
Add raw data files to the Hub with GitHub LFS for canonical dataset I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term stor...
[ -1.1749579906463623, -0.976858377456665, -0.7925111055374146, 1.393868088722229, -0.09657111018896103, -1.3233355283737183, 0.06094920262694359, -1.0533950328826904, 1.574756145477295, -0.6517153382301331, 0.34306833148002625, -1.7460047006607056, -0.10316997766494751, -0.5294932723045349,...
https://github.com/huggingface/datasets/issues/3164
Add raw data files to the Hub with GitHub LFS for canonical dataset
Ah I see, I think I was unclear on whether there were benefits to uploading a canonical dataset vs. a community-provided dataset. Thanks for clarifying. I'll see if we want to create an organization namespace and otherwise, will upload the dataset under my personal namespace.
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team...
1,145
45
Add raw data files to the Hub with GitHub LFS for canonical dataset I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term stor...
[ -1.2852940559387207, -1.0434821844100952, -0.8062720894813538, 1.3362914323806763, -0.10872884094715118, -1.3458601236343384, 0.056563083082437515, -1.0520015954971313, 1.6428329944610596, -0.8153623938560486, 0.3243493139743805, -1.7059617042541504, -0.048681437969207764, -0.5002539157867...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
> It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). > > I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsT...
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
1,146
75
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -1.1942999362945557, -0.968590259552002, -0.8166863918304443, 1.4075016975402832, -0.04359383136034012, -1.2315858602523804, -0.07663621753454208, -1.1036940813064575, 1.7049996852874756, -0.7670934200286865, 0.29267093539237976, -1.6851131916046143, -0.002896423451602459, -0.6527590155601...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
Hi ! You can run the command if you download the repository ``` git clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest ``` and run the command ``` datasets-cli test DataMeasurementsTest/DataMeasurementsTest.py ``` (though on my side it doesn't manage to download the data since the dataset ...
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
1,146
43
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -1.1792032718658447, -0.9796724915504456, -0.7932283282279968, 1.4510613679885864, -0.028500087559223175, -1.3242453336715698, -0.020790945738554, -1.11290442943573, 1.7967365980148315, -0.759889543056488, 0.3518850803375244, -1.741816520690918, 0.007011920213699341, -0.6176629662513733, ...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
> Hi ! You can run the command if you download the repository > > ``` > git clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest > ``` > > and run the command > > ``` > datasets-cli test DataMeasurementsTest/DataMeasurementsTest.py > ``` > > (though on my side it doesn't manage to down...
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
1,146
80
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -1.1593290567398071, -0.9312940835952759, -0.7875716090202332, 1.4619579315185547, -0.07452331483364105, -1.3411223888397217, -0.0540878102183342, -1.1503340005874634, 1.7490304708480835, -0.7590304613113403, 0.33235985040664673, -1.7311354875564575, 0.004972237162292004, -0.61855387687683...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test`
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
1,146
20
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -1.163761019706726, -0.9376861453056335, -0.8577979207038879, 1.4665597677230835, -0.026305940002202988, -1.2699612379074097, -0.01042240485548973, -1.0722997188568115, 1.6806116104125977, -0.7854085564613342, 0.338885635137558, -1.763633131980896, -0.03214478865265846, -0.5888956785202026...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
> I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test` your example repo and this page `https://huggingface.co/docs/datasets/add_dataset.html` helped me solve it. Thanks a lot
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
1,146
35
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -1.150272250175476, -0.9474900960922241, -0.8543606996536255, 1.4536614418029785, -0.03737078979611397, -1.2567919492721558, -0.007090888451784849, -1.0671882629394531, 1.7060550451278687, -0.7607552409172058, 0.3495774567127228, -1.7745420932769775, -0.05551745370030403, -0.59219408035278...
https://github.com/huggingface/datasets/issues/3155
Illegal instruction (core dumped) at datasets import
It seems to be an issue with how conda-forge is building the binaries. It works on some machines, but not a machine with AMD Opteron 8384 processors.
## Describe the bug I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)" ## Steps to reproduce the bug ``` conda create --prefix path/to/env conda activate path/to/env conda install -c huggingface -c conda-forge datasets # exits with output "Illegal instruction...
1,147
27
Illegal instruction (core dumped) at datasets import ## Describe the bug I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)" ## Steps to reproduce the bug ``` conda create --prefix path/to/env conda activate path/to/env conda install -c huggingface -c conda-f...
[ -1.1604241132736206, -0.8478537201881409, -0.8028604388237, 1.4558141231536865, -0.07892651855945587, -1.3258373737335205, 0.08724900335073471, -1.0436246395111084, 1.664594292640686, -0.7512722611427307, 0.3415481746196747, -1.734424352645874, 0.10133332014083862, -0.5856462717056274, -...
https://github.com/huggingface/datasets/issues/3154
Sacrebleu unexpected behaviour/requirement for data format
Hi @BramVanroy! Good question. This project relies on PyArrow (tables) to store data too big to fit in RAM. In the case of metrics, this means that the number of predictions and references has to match to form a table. That's why your example throws an error even though it matches the schema: ```python refs = [...
## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/dataset...
1,148
197
Sacrebleu unexpected behaviour/requirement for data format ## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets impleme...
[ -1.2714515924453735, -0.9646323919296265, -0.7397177219390869, 1.4438555240631104, -0.17027974128723145, -1.267376184463501, 0.14462564885616302, -1.0257219076156616, 1.7350375652313232, -0.7997449040412903, 0.21138471364974976, -1.6854413747787476, -0.037718966603279114, -0.57683086395263...
https://github.com/huggingface/datasets/issues/3154
Sacrebleu unexpected behaviour/requirement for data format
Thanks, that makes sense. It is a bit unfortunate because it may be confusing to users since the input format is suddenly different from what they may expect from the underlying library/metric. But it is understandable due to how `datasets` works!
## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/dataset...
1,148
41
Sacrebleu unexpected behaviour/requirement for data format ## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets impleme...
[ -1.2714515924453735, -0.9646323919296265, -0.7397177219390869, 1.4438555240631104, -0.17027974128723145, -1.267376184463501, 0.14462564885616302, -1.0257219076156616, 1.7350375652313232, -0.7997449040412903, 0.21138471364974976, -1.6854413747787476, -0.037718966603279114, -0.57683086395263...
https://github.com/huggingface/datasets/issues/3148
Streaming with num_workers != 0
I can confirm that I was able to reproduce the bug. This seems odd given that #3423 reports duplicate data retrieval when `num_workers` and `streaming` are used together, which is obviously different from what is reported here.
## Describe the bug When using dataset streaming with pytorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook th...
1,150
37
Streaming with num_workers != 0 ## Describe the bug When using dataset streaming with pytorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience,...
[ -1.2899023294448853, -0.9265585541725159, -0.6499216556549072, 1.463195562362671, -0.19381313025951385, -1.187313437461853, 0.19972464442253113, -1.1369507312774658, 1.603285551071167, -0.8811686038970947, 0.27341601252555847, -1.6295222043991089, 0.015482775866985321, -0.5646023154258728,...
https://github.com/huggingface/datasets/issues/3148
Streaming with num_workers != 0
Any update? A possible solution is to have multiple arrow files as shards, and handle them like what webdatasets does. ![image](https://user-images.githubusercontent.com/11533479/148176637-72746b2c-c122-47aa-bbfe-224b13ee9a71.png) PyTorch's new dataset RFC is supporting sharding now, which may help avoid duplicate...
## Describe the bug When using dataset streaming with pytorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook th...
1,150
39
Streaming with num_workers != 0 ## Describe the bug When using dataset streaming with pytorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience,...
[ -1.2767499685287476, -0.9275940656661987, -0.6532034873962402, 1.4667311906814575, -0.18468055129051208, -1.1943587064743042, 0.18698257207870483, -1.1339880228042603, 1.6053779125213623, -0.8631937503814697, 0.2601699233055115, -1.6242579221725464, 0.015748506411910057, -0.562455296516418...
https://github.com/huggingface/datasets/issues/3148
Streaming with num_workers != 0
Hi ! Thanks for the insights :) Note that in streaming mode there are usually no Arrow files. The data are streamed from TAR, ZIP, text, etc. files directly from the web. Though for sharded datasets we can definitely adopt a similar strategy !
## Describe the bug When using dataset streaming with pytorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook th...
1,150
43
Streaming with num_workers != 0 ## Describe the bug When using dataset streaming with pytorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience,...
[ -1.2976053953170776, -0.933087170124054, -0.6422877311706543, 1.4641423225402832, -0.21196408569812775, -1.1920058727264404, 0.18144986033439636, -1.1504863500595093, 1.600388765335083, -0.8708008527755737, 0.25362494587898254, -1.630141258239746, 0.03506500646471977, -0.5512242317199707, ...
https://github.com/huggingface/datasets/issues/3145
[when Image type will exist] provide a way to get the data as binary + filename
@severo I'll keep that in mind. You can track progress on the Image feature in #3163 (still in the early stage).
**Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in or...
1,151
21
[when Image type will exist] provide a way to get the data as binary + filename **Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image ...
[ -1.2407487630844116, -0.9941307902336121, -0.9032617211341858, 1.4533718824386597, -0.3362070322036743, -1.4420992136001587, 0.009026211686432362, -1.0757169723510742, 1.786736011505127, -0.8713910579681396, 0.34379836916923523, -1.7232316732406616, 0.15062753856182098, -0.7187554836273193...
https://github.com/huggingface/datasets/issues/3145
[when Image type will exist] provide a way to get the data as binary + filename
Hi ! As discussed with @severo offline it looks like the dataset viewer already supports reading PIL images, so maybe the dataset viewer doesn't need to disable decoding after all
**Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in or...
1,151
30
[when Image type will exist] provide a way to get the data as binary + filename **Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image ...
[ -1.234841227531433, -0.9671058654785156, -0.9124277830123901, 1.425289273262024, -0.36845913529396057, -1.4150644540786743, 0.019938111305236816, -1.0685452222824097, 1.762600302696228, -0.884365975856781, 0.37013500928878784, -1.719658613204956, 0.14607559144496918, -0.738448441028595, ...
https://github.com/huggingface/datasets/issues/3142
Provide a way to write a streamed dataset to the disk
Yes, I agree this feature is much needed. We could do something similar to what TF does (https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache). Ideally, if the entire streamed dataset is consumed/cached, the generated cache should be reusable for the Arrow dataset.
**Is your feature request related to a problem? Please describe.** The streaming mode allows you to get the first 100 rows of a dataset very quickly. But it does not cache the answer, so a subsequent call to get the same 100 rows will send a request to the server again and again. **Describe the solution you'd like** ...
1,154
36
Provide a way to write a streamed dataset to the disk **Is your feature request related to a problem? Please describe.** The streaming mode allows you to get the first 100 rows of a dataset very quickly. But it does not cache the answer, so a subsequent call to get the same 100 rows will send a request to the server ag...
[ -1.2595843076705933, -1.0525587797164917, -0.7570509910583496, 1.4276875257492065, -0.25027838349342346, -1.3187587261199951, 0.002333391457796097, -1.0846755504608154, 1.7662547826766968, -0.7390720248222351, 0.18316933512687683, -1.7214207649230957, 0.115323506295681, -0.5730663537979126...
https://github.com/huggingface/datasets/issues/3135
Make inspect.get_dataset_config_names always return a non-empty list of configs
Hi @severo, I guess this issue requests not only to be able to access the configuration name (by using `inspect.get_dataset_config_names`), but the configuration itself as well (I mean you use the name to get the configuration afterwards, maybe using `builder_cls.builder_configs`), is this right?
**Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the solution you'd like** In that sense inspect.get_dataset_config_names should always...
1,156
43
Make inspect.get_dataset_config_names always return a non-empty list of configs **Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the sol...
[ -1.2288191318511963, -0.933009922504425, -0.8180731534957886, 1.3975074291229248, -0.17639899253845215, -1.2961366176605225, 0.09655627608299255, -1.157646894454956, 1.6987895965576172, -0.761380672454834, 0.24706926941871643, -1.6541571617126465, 0.024677693843841553, -0.6145434975624084,...
https://github.com/huggingface/datasets/issues/3135
Make inspect.get_dataset_config_names always return a non-empty list of configs
Yes, maybe the issue could be reformulated. As a user, I want to avoid having to manage special cases: - I want to be able to get the names of a dataset's configs, and use them in the rest of the API (get the data, get the split names, etc). - I don't want to have to manage datasets with named configs (`glue`) differ...
**Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the solution you'd like** In that sense inspect.get_dataset_config_names should always...
1,156
71
Make inspect.get_dataset_config_names always return a non-empty list of configs **Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the sol...
[ -1.225027322769165, -0.9524728655815125, -0.8186730146408081, 1.3867769241333008, -0.17411354184150696, -1.2927366495132446, 0.09044144302606583, -1.1496926546096802, 1.718186616897583, -0.7655543684959412, 0.2778894603252411, -1.6834838390350342, 0.004978185519576073, -0.6372475624084473,...
https://github.com/huggingface/datasets/issues/3134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
Hi, Did you try to run the code multiple times (GitHub URLs can be down sometimes for various reasons)? I can access `https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py`, so this code is working without an error on my side. Additionally, can you please run the `datasets-cli env`...
datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load....
1,157
58
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ---->...
[ -1.2385988235473633, -0.9526150822639465, -0.5611421465873718, 1.4222878217697144, -0.14228807389736176, -1.1780242919921875, 0.18955332040786743, -1.0361979007720947, 1.493194341659546, -0.7912179827690125, 0.17293430864810944, -1.7060819864273071, -0.10105345398187637, -0.630240201950073...
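Since the comment above suggests GitHub raw URLs can be transiently down, a simple retry-with-backoff wrapper could paper over short outages. This is a hypothetical helper, not something `datasets` provides; `flaky_fetch` simulates a URL that fails twice before responding:

```python
import time


def retry(fn, attempts=3, base_delay=0.01):
    """Call fn() until it succeeds, sleeping with exponential backoff
    between attempts; re-raise the last error if all attempts fail."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)


# Simulated flaky endpoint: fails on the first two calls.
state = {"calls": 0}

def flaky_fetch():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("temporarily unreachable")
    return "rouge.py contents"


result = retry(flaky_fetch)
```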
https://github.com/huggingface/datasets/issues/3134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
Same issue when running `metric = datasets.load_metric("accuracy")`. Error info is: ``` metric = datasets.load_metric("accuracy") Traceback (most recent call last): File "<ipython-input-2-d25db38b26c5>", line 1, in <module> metric = datasets.load_metric("accuracy") File "D:\anaconda3\lib\site-package...
datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load....
1,157
103
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ---->...
[ -1.2385988235473633, -0.9526150822639465, -0.5611421465873718, 1.4222878217697144, -0.14228807389736176, -1.1780242919921875, 0.18955332040786743, -1.0361979007720947, 1.493194341659546, -0.7912179827690125, 0.17293430864810944, -1.7060819864273071, -0.10105345398187637, -0.630240201950073...
https://github.com/huggingface/datasets/issues/3134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
It seems to be able to solve this issue by adding the equivalent `accuracy.py` locally. change `metric = datasets.load_metric("accuracy")` to `metric = datasets.load_metric(path = "./accuracy.py")`. Copy `accuracy.py` from browser at [accuracy.py](https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metric...
datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load....
1,157
31
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ---->...
[ -1.2385988235473633, -0.9526150822639465, -0.5611421465873718, 1.4222878217697144, -0.14228807389736176, -1.1780242919921875, 0.18955332040786743, -1.0361979007720947, 1.493194341659546, -0.7912179827690125, 0.17293430864810944, -1.7060819864273071, -0.10105345398187637, -0.630240201950073...
https://github.com/huggingface/datasets/issues/3127
datasets-cli: conversion of a tfds dataset to a huggingface one.

Hi, the MNIST dataset is already available on the Hub. You can use it as follows: ```python import datasets dataset_dict = datasets.load_dataset("mnist") ``` As for the conversion of TFDS datasets to HF datasets, we will be working on it in the coming months, so stay tuned.
### Discussed in https://github.com/huggingface/datasets/discussions/3079 <div type='discussions-op-text'> <sup>Originally posted by **vitalyshalumov** October 14, 2021</sup> I'm trying to convert a tfds dataset to a huggingface one. I've tried: 1. datasets-cli convert --tfds_path ~/tensorflow_datas...
1,159
46
datasets-cli: conversion of a tfds dataset to a huggingface one. ### Discussed in https://github.com/huggingface/datasets/discussions/3079 <div type='discussions-op-text'> <sup>Originally posted by **vitalyshalumov** October 14, 2021</sup> I'm trying to convert a tfds dataset to a huggingface one. I've trie...
[ -1.1809792518615723, -0.939287543296814, -0.7410374283790588, 1.4134289026260376, -0.09243842959403992, -1.3572545051574707, 0.041762180626392365, -0.9655032753944397, 1.6887696981430054, -0.779238760471344, 0.2862279713153839, -1.7774150371551514, -0.06432335823774338, -0.4948586225509643...
https://github.com/huggingface/datasets/issues/3126
"arabic_billion_words" dataset does not create the full dataset
Thanks for reporting, @vitalyshalumov. Apparently the script to parse the data has a bug, and does not generate the entire dataset. I'm fixing it.
## Describe the bug When running: raw_dataset = load_dataset('arabic_billion_words','Alittihad') the correct dataset file is pulled from the url. But, the generated dataset includes just a small portion of the data included in the file. This is true for all other portions of the "arabic_billion_words" dataset ('A...
1,160
24
"arabic_billion_words" dataset does not create the full dataset ## Describe the bug When running: raw_dataset = load_dataset('arabic_billion_words','Alittihad') the correct dataset file is pulled from the url. But, the generated dataset includes just a small portion of the data included in the file. This is tru...
[ -1.1933249235153198, -0.8244751691818237, -0.700864315032959, 1.4450217485427856, -0.18942266702651978, -1.2654016017913818, 0.1458308845758438, -0.9651365876197815, 1.6838618516921997, -0.8260760307312012, 0.19554691016674042, -1.6898428201675415, -0.009398935362696648, -0.558305084705352...
https://github.com/huggingface/datasets/issues/3123
Segmentation fault when loading datasets from file
Hi ! I created an issue on Arrow's JIRA after making a minimum reproducible example https://issues.apache.org/jira/browse/ARROW-14439 ```python import io import pyarrow.json as paj batch = b'{"a": [], "b": 1}\n{"b": 1}' block_size = 12 paj.read_json( io.BytesIO(batch), read_options=paj.ReadOptions...
## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features/ ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e50515076...
1,161
58
Segmentation fault when loading datasets from file ## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features/ ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693...
[ -1.162652850151062, -0.8511763215065002, -0.7794715762138367, 1.5373163223266602, -0.12165305763483047, -1.2352145910263062, 0.14369723200798035, -1.029338002204895, 1.6951777935028076, -0.7259195446968079, 0.35328078269958496, -1.673929214477539, -0.0071411943063139915, -0.602613210678100...
https://github.com/huggingface/datasets/issues/3123
Segmentation fault when loading datasets from file
The issue has been fixed in pyarrow 6.0.0, please update pyarrow :) The issue was due to missing fields in the JSON data of type list. Now it's working fine and missing list fields are replaced with empty lists
## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features/ ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e50515076...
1,161
39
Segmentation fault when loading datasets from file ## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features/ ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693...
[ -1.1747406721115112, -0.8085299730300903, -0.7923359274864197, 1.4749140739440918, -0.12760832905769348, -1.239095687866211, 0.16308532655239105, -1.0328837633132935, 1.6969166994094849, -0.7629347443580627, 0.2841615080833435, -1.7005689144134521, 0.02284514158964157, -0.5960378050804138,...
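The fix described above (pyarrow >= 6.0.0 replaces missing list-typed JSON fields with empty lists) can be emulated in pure Python when pinned to an older pyarrow. This is an assumed workaround sketch, not code from either library:

```python
import json


def normalize_rows(lines, list_fields):
    """Pre-fill missing list-typed fields with empty lists before
    handing JSON lines to the reader, mirroring the pyarrow 6.0.0
    behaviour for missing list fields."""
    for line in lines:
        row = json.loads(line)
        for field in list_fields:
            row.setdefault(field, [])
        yield row
```

For the minimal example from the Arrow JIRA report (`{"a": [], "b": 1}` followed by `{"b": 1}`), the second row gains `"a": []` so both rows share the same schema.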
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi, there is a difference in how the `data_dir` is zipped between the `classla/janes_tag` and the `classla/reldi_hr` dataset. After unzipping, for the former, the data files (`*.conllup`) are in the root directory (root -> data files), and for the latter, they are inside the `data` directory (root -> `data` -> data ...
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
1,162
71
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -1.2348365783691406, -0.9167017936706543, -0.6255295276641846, 1.4129399061203003, -0.15435607731342316, -1.219717264175415, 0.05580707639455795, -1.0339274406433105, 1.6345734596252441, -0.7894803285598755, 0.18790456652641296, -1.686458706855774, 0.013226320967078209, -0.5875965356826782...
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi Mario, I had already tried that before, but it didn't work. I have now recreated the `classla/janes_tag` zip file so that it also contains the `data` directory, but I am still getting the same error.
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
1,162
36
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -1.2348365783691406, -0.9167017936706543, -0.6255295276641846, 1.4129399061203003, -0.15435607731342316, -1.219717264175415, 0.05580707639455795, -1.0339274406433105, 1.6345734596252441, -0.7894803285598755, 0.18790456652641296, -1.686458706855774, 0.013226320967078209, -0.5875965356826782...
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi, I just tried to download the `classla/janes_tag` dataset, and this time the zip file is extracted correctly. However, the script is now throwing the IndexError, probably due to a bug in the `_generate_examples`. Let me know if you are still getting the same error.
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
1,162
45
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -1.2348365783691406, -0.9167017936706543, -0.6255295276641846, 1.4129399061203003, -0.15435607731342316, -1.219717264175415, 0.05580707639455795, -1.0339274406433105, 1.6345734596252441, -0.7894803285598755, 0.18790456652641296, -1.686458706855774, 0.013226320967078209, -0.5875965356826782...
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi, could you try to download the dataset with a different `cache_dir` like so: ```python import datasets dataset = datasets.load_dataset('classla/janes_tag', split='validation', cache_dir="path/to/different/cache/dir") ``` If this works, then most likely the cached extracted data is causing issues. This data ...
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
1,162
84
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -1.2348365783691406, -0.9167017936706543, -0.6255295276641846, 1.4129399061203003, -0.15435607731342316, -1.219717264175415, 0.05580707639455795, -1.0339274406433105, 1.6345734596252441, -0.7894803285598755, 0.18790456652641296, -1.686458706855774, 0.013226320967078209, -0.5875965356826782...
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Thank you, deleting the `~/.cache/huggingface/datasets/downloads/extracted` directory helped. However, I am still having problems. There was indeed a bug in the script that was throwing an `IndexError`, which I have now corrected (added the condition to skip the lines starting with '# text') and it is working locall...
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
1,162
117
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -1.2348365783691406, -0.9167017936706543, -0.6255295276641846, 1.4129399061203003, -0.15435607731342316, -1.219717264175415, 0.05580707639455795, -1.0339274406433105, 1.6345734596252441, -0.7894803285598755, 0.18790456652641296, -1.686458706855774, 0.013226320967078209, -0.5875965356826782...
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi, Did some investigation. To fix the dataset script on the Hub, append the following labels to the `names` list of the `upos_tags` field: ```'INTJ NOUN', 'AUX PRON', 'PART ADV', 'PRON ADP', 'INTJ INTJ', 'VERB NOUN', 'NOUN AUX'```. This step is required to avoid an error due to missing labels in the followin...
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
1,162
84
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -1.2348365783691406, -0.9167017936706543, -0.6255295276641846, 1.4129399061203003, -0.15435607731342316, -1.219717264175415, 0.05580707639455795, -1.0339274406433105, 1.6345734596252441, -0.7894803285598755, 0.18790456652641296, -1.686458706855774, 0.013226320967078209, -0.5875965356826782...
https://github.com/huggingface/datasets/issues/3119
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech
Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files.
## Adding a Dataset - **Name:** *openslr** - **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.* - **Paper:** *https://www.openslr.org/resources/83/about.html* - **Data:** *Eleven separate data files can be found via https://www.openslr.org/r...
1,163
20
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech ## Adding a Dataset - **Name:** *openslr** - **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.* - **Paper:** *https://www.openslr.org/resources/83/about.html* ...
[ -1.2155710458755493, -1.1286088228225708, -0.5822110176086426, 1.2496706247329712, -0.2718657851219177, -1.3179337978363037, 0.15315952897071838, -0.9607557654380798, 1.654067039489746, -0.5179534554481506, 0.2122470587491989, -1.697288990020752, -0.017263345420360565, -0.4677712023258209,...
https://github.com/huggingface/datasets/issues/3114
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.
## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Dataset` (in arrow_dataset.py) results in an error when calling the download method in the `fs` parameter. ## Steps to repr...
1,164
21
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem ## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Datase...
[ -1.2320432662963867, -0.8434974551200867, -0.6415994167327881, 1.4545531272888184, -0.08525841683149338, -1.2882745265960693, 0.21978053450584412, -1.1482622623443604, 1.7084405422210693, -0.8320690989494324, 0.3824213445186615, -1.661045789718628, -0.004411435220390558, -0.615102231502533...
https://github.com/huggingface/datasets/issues/3114
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0. I'll try again with `PyArrowHDFS` once I update arrow to 6.0.0. Thanks!
## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Dataset` (in arrow_dataset.py) results in an error when calling the download method in the `fs` parameter. ## Steps to repr...
1,164
29
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem ## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Datase...
[ -1.2320432662963867, -0.8434974551200867, -0.6415994167327881, 1.4545531272888184, -0.08525841683149338, -1.2882745265960693, 0.21978053450584412, -1.1482622623443604, 1.7084405422210693, -0.8320690989494324, 0.3824213445186615, -1.661045789718628, -0.004411435220390558, -0.615102231502533...
https://github.com/huggingface/datasets/issues/3113
Loading Data from HDF files
I would also like this support or something similar. Geospatial datasets come in netcdf which is derived from hdf5, or zarr. I've gotten zarr stores to work with datasets and streaming, but it takes a while to convert the data to zarr if it's not stored in that natively.
**Is your feature request related to a problem? Please describe.** More often than not I come along big HDF datasets, and currently there is no straight forward way to feed them to a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that gets an interface implemented by the user ...
1,165
48
Loading Data from HDF files **Is your feature request related to a problem? Please describe.** More often than not I come along big HDF datasets, and currently there is no straight forward way to feed them to a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that gets an inte...
[ -1.2234126329421997, -1.0743210315704346, -0.6722300052642822, 1.281704306602478, -0.2383677363395691, -1.2452539205551147, 0.16791288554668427, -1.1577773094177246, 1.7835322618484497, -0.8676137328147888, 0.28145045042037964, -1.6657413244247437, 0.05179290845990181, -0.5536896586418152,...
https://github.com/huggingface/datasets/issues/3113
Loading Data from HDF files
@mariosasko , I would like to contribute on this "good second issue" . Is there anything in the works for this Issue or can I go ahead ?
**Is your feature request related to a problem? Please describe.** More often than not I come along big HDF datasets, and currently there is no straight forward way to feed them to a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that gets an interface implemented by the user ...
1,165
28
Loading Data from HDF files **Is your feature request related to a problem? Please describe.** More often than not I come along big HDF datasets, and currently there is no straight forward way to feed them to a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that gets an inte...
[ -1.2161248922348022, -1.07193124294281, -0.6916710138320923, 1.337469220161438, -0.25310900807380676, -1.293769359588623, 0.1386803239583969, -1.148705244064331, 1.777288794517517, -0.8647724986076355, 0.29142633080482483, -1.6751539707183838, 0.0755249485373497, -0.5529330968856812, -0....
https://github.com/huggingface/datasets/issues/3113
Loading Data from HDF files
Hi @VijayKalmath! As far as I know, nobody is working on it, so feel free to take over. Also, before you start, I suggest you comment `#self-assign` on this issue to assign it to yourself.
**Is your feature request related to a problem? Please describe.** More often than not I come along big HDF datasets, and currently there is no straight forward way to feed them to a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that gets an interface implemented by the user ...
1,165
35
Loading Data from HDF files **Is your feature request related to a problem? Please describe.** More often than not I come along big HDF datasets, and currently there is no straight forward way to feed them to a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that gets an inte...
[ -1.2121500968933105, -1.0587314367294312, -0.6765270829200745, 1.3023241758346558, -0.26700177788734436, -1.2752137184143066, 0.1686159074306488, -1.14130437374115, 1.8037394285202026, -0.8821200728416443, 0.31639227271080017, -1.6826906204223633, 0.030980534851551056, -0.5680145621299744,...
https://github.com/huggingface/datasets/issues/3112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
I am very unsure on why you tagged me here. I am not a maintainer of the Datasets library and have no idea how to help you.
## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of writer_batch_size (say 2,4,8,16,32,64 and 128 in my case), it returns the following error : > OverflowError: There was an overflow in the <c...
1,166
27
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB ## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of...
[ -1.3111408948898315, -0.9018009305000305, -0.5220432877540588, 1.5846108198165894, -0.1323675662279129, -1.1428461074829102, 0.22946959733963013, -0.9254502654075623, 1.5601636171340942, -0.9610655307769775, 0.31086525321006775, -1.4772238731384277, 0.06510502099990845, -0.6179060339927673...
https://github.com/huggingface/datasets/issues/3112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
Ok got it, tensor full of NaNs, cf. ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self) 315 # This check fails with FloatArrays with nans, which is not what we want, so account for that:
## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of writer_batch_size (say 2,4,8,16,32,64 and 128 in my case), it returns the following error : > OverflowError: There was an overflow in the <c...
1,166
30
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB ## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of...
[ -1.3111408948898315, -0.9018009305000305, -0.5220432877540588, 1.5846108198165894, -0.1323675662279129, -1.1428461074829102, 0.22946959733963013, -0.9254502654075623, 1.5601636171340942, -0.9610655307769775, 0.31086525321006775, -1.4772238731384277, 0.06510502099990845, -0.6179060339927673...
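The overflow above is hit when a single Arrow chunk exceeds 2 GB, and the suggested remedy is lowering `writer_batch_size`. A rough heuristic for picking that value from one sample row could look like this; it is a back-of-the-envelope sketch, not how `datasets` sizes batches internally:

```python
import sys


def rows_per_batch(sample_row, limit_bytes=2 * 1024 ** 3):
    """Estimate how many rows fit under the ~2 GB Arrow chunk limit,
    using one sample row's shallow in-memory size as a proxy."""
    row_size = sum(sys.getsizeof(v) for v in sample_row.values())
    return max(1, limit_bytes // max(1, row_size))


# e.g. pass the result as writer_batch_size to Dataset.map
sample = {"tokens": list(range(512)), "label": 0}
suggested = rows_per_batch(sample)
```

`sys.getsizeof` undercounts nested containers, so in practice one would divide the estimate further by a safety factor.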