Dataset Viewer issue: TooManyColumnsError

#2
by aidystark - opened

The dataset viewer is not working.

Error details:

Error code:   TooManyColumnsError

cc @albertvillanova @lhoestq @severo.

Hi, as explained in the error message:

The number of columns (41631) exceeds the maximum supported number of columns (1000). This is a current limitation of the datasets viewer. You can reduce the number of columns if you want the viewer to work.

We don't plan to increase the maximum number of supported columns for now, but there is a related issue you can upvote: https://github.com/huggingface/datasets-server/issues/1172. It proposes truncating the output to show the first 1,000 columns instead of showing nothing.

Please, do you think there is any issue with this code? It was originally intended to have just two columns; it is the number of rows that is supposed to be 41,631.

import datasets
from datasets.tasks import ImageClassification
import pandas as pd

_HOMEPAGE = "https://huggingface.co/datasets/aidystark/shoe41k"

_DESCRIPTION = (
    "----------------------------------------"
)

_CITATION = """\

"""

_LICENSE = """
LICENSE AGREEMENT
=================
"""

_NAMES = ['Dressing Shoe', 'Boot', 'Crocs', 'Heels', 'Sandals', 'Sneakers']
_CSV = "https://huggingface.co/datasets/aidystark/shoe41k/resolve/main/FOOT40K.csv"
_URL = "https://huggingface.co/datasets/aidystark/shoe41k/resolve/main/shoe40k"

# Read the metadata CSV at import time; it has two columns: file_name and Label.
df = pd.read_csv(_CSV)
imgLabels = df['Label']


class shoe40k(datasets.GeneratorBasedBuilder):
    """-------"""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "image": datasets.Image(),
                    "label": datasets.ClassLabel(names=_NAMES),
                }
            ),
            supervised_keys=("image", "label"),
            homepage=_HOMEPAGE,
            citation=_CITATION,
            license=_LICENSE,
            task_templates=[ImageClassification(image_column="image", label_column="label")],
        )

    def _split_generators(self, dl_manager):
        path = dl_manager.download(_URL)
        image_iters = dl_manager.iter_archive(path)
        return [
            datasets.SplitGenerator(datasets.Split.TRAIN, gen_kwargs={"images": image_iters})
        ]

    def _generate_examples(self, images):
        """Generate images and labels for splits."""
        idx = 0
        # Iterate through the archive; labels are matched to images by position,
        # which assumes the archive yields files in the same order as the CSV rows.
        for filepath, image in images:
            yield idx, {
                "image": {"path": filepath, "bytes": image.read()},
                "label": imgLabels[idx],
            }
            idx += 1

This is what my pandas dataframe looks like:
file_name,Label
SHOEDR1175.jpg,Dressing Shoe
SHOEDR1177.jpg,Dressing Shoe
SHOEDR1178.jpg,Dressing Shoe
SHOEDR1179.jpg,Dressing Shoe
SHOEDR1180.jpg,Dressing Shoe
SHOEDR1181.jpg,Dressing Shoe
SHOEDR1182.jpg,Dressing Shoe
SHOEDR1185.jpg,Dressing Shoe
SHOEDR1186.jpg,Dressing Shoe
SHOEDR1188.jpg,Dressing Shoe
SHOEDR1189.jpg,Dressing Shoe
SHOEDR1190.jpg,Dressing Shoe
SHOEDR1192.jpg,Dressing Shoe
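
A side note on the label lookup in the script above: matching labels to images by position only works if the archive yields files in exactly the same order as the CSV rows. Below is a minimal sketch of a filename-based lookup instead, assuming the file names inside the archive match the file_name column; the label_by_name mapping is an illustrative name, not something from the original script.

import os
import pandas as pd

_CSV = "https://huggingface.co/datasets/aidystark/shoe41k/resolve/main/FOOT40K.csv"

# Build a file_name -> Label mapping once, so labels no longer depend on archive order.
df = pd.read_csv(_CSV)
label_by_name = dict(zip(df["file_name"], df["Label"]))

def _generate_examples(self, images):
    """Yield examples, looking each label up by the image's file name."""
    for idx, (filepath, image) in enumerate(images):
        yield idx, {
            "image": {"path": filepath, "bytes": image.read()},
            # basename strips any directory prefix the archive adds to the path
            "label": label_by_name[os.path.basename(filepath)],
        }

This would be a drop-in replacement for the _generate_examples method in the script, with the mapping built at module level next to the existing pd.read_csv call.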

Have you tried using push_to_hub() once you've loaded your dataset locally? It will push the data as Parquet files and should make the viewer work.
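
For reference, a minimal sketch of that suggestion, assuming the images are arranged locally so the imagefolder builder can read them together with a metadata.csv (a file_name column plus a label column); the local path is a placeholder, and the repo id is the one from this thread.

from datasets import load_dataset

# Load locally with the imagefolder builder; a metadata.csv next to the images
# (columns: file_name, label) yields exactly two columns, "image" and "label".
ds = load_dataset("imagefolder", data_dir="path/to/shoe40k")  # placeholder path

# Push the data as Parquet files; the viewer reads these directly,
# so no loading script is needed.
ds.push_to_hub("aidystark/shoe41k")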