Dataset Viewer issue: TooBigContentError

#3
by zinc75 - opened
Laboratoire de Mécanique des Structures et des Systèmes Couplés org

The dataset viewer is not working for speechless_clean and speechless_noisy subsets.

Could it be fixed somehow without reducing the size of the audio.* columns?

Error details:

Error code:   TooBigContentError

cc @albertvillanova @lhoestq @severo .

It is sometimes due to the column metadata being too big. Indeed, the dataset viewer API returns the column metadata along with the rows, and we have a threshold on the response size. It's generally caused by ClassLabel columns with a lot of classes. More details here: https://github.com/huggingface/dataset-viewer/issues/2215. Do you think that could apply to your dataset? (I don't have access to the gated dataset; I filed a request.)
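To give a feel for why many-class ClassLabel columns blow up the response size: the serialized feature spec stores every class name, and it is sent back with each page of rows. A quick back-of-the-envelope sketch (the exact JSON shape is illustrative, not the viewer's real payload):

```python
import json

# ClassLabel metadata stores every class name; with many classes the
# serialized feature spec alone can dominate each /rows response.
names = [f"class_{i:05d}" for i in range(50_000)]
metadata = {"label": {"_type": "ClassLabel", "names": names}}
size_bytes = len(json.dumps(metadata).encode("utf-8"))
print(f"~{size_bytes / 1024:.0f} KiB of metadata for {len(names)} classes")
```

With 50k class names of ~11 characters each, the metadata alone is several hundred KiB, repeated on every page.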


Hi @severo , thanks for your answer. I reviewed the pending access requests, and your username was not present. So I added access directly using your username.

To be more precise about the problem encountered with the dataset viewer:

  • for the speech_clean subset, the dataset displays fine for page 1, but every other page shows "The dataset viewer is not available for this split. Rows from parquet row groups are too big to be read: 470.70 MiB (max=286.10 MiB) Error code: TooBigContentError".

  • for the speech_noisy subset, it is exactly the same problem: fine for page 1, but every other page shows "The dataset viewer is not available for this split. Rows from parquet row groups are too big to be read: 408.69 MiB (max=286.10 MiB) Error code: TooBigContentError".

  • for the speechless_clean and speechless_noisy subsets, even the first page does not display: "The dataset viewer is not available for this split. Rows from parquet row groups are too big to be read: 3.22 GiB (max=286.10 MiB) Error code: TooBigContentError".

It is odd that page 1 displays fine for the speech_clean and speech_noisy subsets, since all other pages should contain approximately the same amount of data per row group as page 1.

Thanks again for your support,

Best,

Eric

OK, interesting. There are 6 audio columns, so every page requires generating 600 audio files. The first page is pre-computed and cached, while the following ones are computed on the fly, which might partly explain the difference in behavior. But since you say that "other pages should contain approximately the same amount of data per row group as page 1", I think we have an inconsistency between how we limit the size of the first page and of the following ones.

Anyway, we are clearly limited at the moment in processing audio data of this size. Adding this case to https://github.com/huggingface/dataset-viewer/issues/2215
