SIGKILL issue when loading the data for lang code "ceb"

#17
by sabilmakbar - opened

Hi, I received a SIGKILL (memory exhaustion) when trying to load lang code "ceb" (Cebuano, a local language in the Philippines) with date version "20231101". This looks like a memory-overflow bug to me, since I'm running this code on a fairly large machine (128 GB of RAM) and other large languages can be loaded with fewer resources:

from datasets import load_dataset

df = load_dataset(dset_name, language=lang_id, date=date_ver, beam_runner='DirectRunner', split="train").to_pandas()

and the last log line before the process is killed is:

generating examples from = /root/.cache/huggingface/datasets/downloads/1da85ce19a6d37fbe99458e68b4a15359854a743c4338160a722ba783c149267

which leads me to believe the issue is caused by the load_dataset method rather than the .to_pandas() method.

My hypothesis is that this is caused by the "ceb" Wikimedia dump of pages-articles-multistream data not being split into multiple files, unlike larger or similarly sized languages such as "zh" or "nl", or languages with a single file at about half the size of "ceb", like "id" (~1 GB for id vs ~2 GB for ceb). This makes the overall memory consumption for "ceb" significantly larger than for those three languages.

If that's the cause, is there any workaround for this? Should this issue be raised against the WikiDump pipeline instead? If not, what other causes could lead to this?

Update: after creating a workaround that splits the bz2 dump into multiple files and processes them lazily, the intended data was constructed successfully.
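For anyone hitting the same problem, here is a minimal sketch of that lazy approach (the file name and the per-page handling are illustrative, not the exact script I used): it streams the bz2 dump with an incremental XML parser so that only one page element is decompressed and held in memory at a time.

import bz2
import xml.etree.ElementTree as ET

def localname(tag):
    # Strip the XML namespace ("{...}page" -> "page") so this works regardless
    # of the export schema version used by the dump.
    return tag.rsplit("}", 1)[-1]

# Hypothetical local path to the downloaded dump; adjust to your cache location.
dump_path = "cebwiki-20231101-pages-articles-multistream.xml.bz2"

n_pages = 0
with bz2.open(dump_path, "rb") as f:  # decompressed lazily, never fully loaded into memory
    for _, elem in ET.iterparse(f, events=("end",)):
        if localname(elem.tag) == "page":
            n_pages += 1
            # ... parse this single page, or write it out to a smaller chunk file ...
            elem.clear()  # drop the parsed subtree to keep memory usage flat

print(f"streamed {n_pages} pages")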

Thanks for reporting, @sabilmakbar.

Note that we are going to deprecate this dataset: it only contains pre-processed data for 6 of the languages, from the 2022-03-01 dump.

I would recommend using the current official "wikimedia/wikipedia" dataset instead, which contains pre-processed data for all languages from the latest dump, 2023-11-01.
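For example (a sketch; config names follow the "<dump-date>.<language>" pattern described on the dataset card):

from datasets import load_dataset

# Load the pre-processed Cebuano articles from the 2023-11-01 dump.
ds = load_dataset("wikimedia/wikipedia", "20231101.ceb", split="train")
df = ds.to_pandas()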

albertvillanova changed discussion status to closed

Thanks for the heads-up, @albertvillanova!
