Error while processing ES language

#13
by kononoff174 - opened

Hello, I have successfully processed the Wikipedia dataset for multiple languages (en, fr, ru, pt, nl, id, it, pl, tr, no, ja, fi, vi, uk, sv, de, zh) with calls like:

dataset = load_dataset("wikipedia", language="en", date="20230401", beam_runner='DirectRunner', cache_dir="./hugging_data/")['train']

But I get an error with the ES (Spanish) language:

Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "apache_beam/runners/common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "/nfs/home/nkononov/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py", line 1018, in _clean_content
    text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell, language=language)
  File "/nfs/home/nkononov/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py", line 1098, in _parse_and_clean_wikicode
    section_text.append(re.sub(re_rm_magic, "", section.strip_code().strip()))
  File "/nfs/home/nkononov/anaconda3/envs/BERT/lib/python3.8/site-packages/mwparserfromhell/wikicode.py", line 666, in strip_code
    stripped = node.strip(**kwargs)
  File "/nfs/home/nkononov/anaconda3/envs/BERT/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py", line 61, in strip
    return self.normalize()
  File "/nfs/home/nkononov/anaconda3/envs/BERT/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py", line 157, in normalize
    return chr(htmlentities.name2codepoint[self.value])
KeyError: '000nbsp'

I have no idea how to fix this. Can you help me, please?
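
For reference, the KeyError comes from mwparserfromhell trying to resolve "&000nbsp;", which is not a valid named HTML entity, so htmlentities.name2codepoint has no mapping for it. If a stopgap is needed while staying on this loader, one possible approach is to monkey-patch HTMLEntity.normalize so it falls back to the raw entity text instead of raising; the sketch below is untested and assumes the patch runs in the same process as the DirectRunner workers.

from mwparserfromhell.nodes.html_entity import HTMLEntity

_original_normalize = HTMLEntity.normalize

def _tolerant_normalize(self):
    # Fall back to the raw entity text (e.g. "&000nbsp;") when the entity
    # cannot be resolved, instead of raising KeyError/ValueError.
    try:
        return _original_normalize(self)
    except (KeyError, ValueError):
        return str(self)

# Apply the patch before calling load_dataset() so strip_code() picks it up.
HTMLEntity.normalize = _tolerant_normalize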

albertvillanova (Datasets Maintainers org)

Thanks for reporting, @kononoff174.

Note that we are going to deprecate this dataset: it only contains the pre-processed data for 6 of the languages.

I would recommend that you use the current official "wikimedia/wikipedia" dataset instead, which contains pre-processed data for all the languages from the latest dump (2023-11-01).
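
For example, the Spanish split can then be loaded directly, with no Beam runner required (assuming the config name follows that dataset's "<dump-date>.<language-code>" pattern, i.e. "20231101.es" for Spanish):

from datasets import load_dataset

# Pre-processed Spanish Wikipedia from the 2023-11-01 dump; no beam_runner needed.
dataset = load_dataset("wikimedia/wikipedia", "20231101.es", cache_dir="./hugging_data/")["train"]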

albertvillanova changed discussion status to closed
