Regarding creating an image segmentation dataset

#4
by greathero - opened

Hello,

For a project I'm trying to create a similar image segmentation dataset, with images and labels. My images and labels seem to work okay (in RGB; 2 classes; 0 is an "extra class", 1 is "not labeled", and 255 is "labeled"); I've also added the .json labels file.
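
For reference, here is roughly how the label values can be inspected (a quick sketch with Pillow and NumPy; the file name is only an example):

```python
import numpy as np
from PIL import Image

# Load one label image and look at which pixel values actually occur.
label = np.asarray(Image.open("labels/example_label.png"))  # example path
print(label.shape, label.dtype)  # e.g. (H, W, 3) for an RGB label
print(np.unique(label))          # expecting only 0, 1 and 255 here
```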

In this notebook (https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb), when I change the dataset parameter to "greathero/newcontrailsvalidationdataset", everything seems to run fine until the model loss becomes NaN.

Perhaps this is because my dataset is missing something. Compared with this dataset, mine seems to be missing two things: a file with information about the dataset, and a number of extra tables in the .duckdb file.

The dataset-information file is small and doesn't seem to have much to do with training the model. The .duckdb file here, on the other hand, is quite a bit larger than the dataset itself, whereas the .duckdb file associated with my dataset is about the same size as my dataset.

Some of the tables in that .duckdb file confuse me - a stopwords table, a docs table, etc. What are they, and could they have something to do with why my model has difficulty training on my dataset?
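
For reference, the tables can be listed like this (a small sketch using the duckdb Python package; the file name is only an example):

```python
import duckdb

# Open a local copy of the .duckdb file read-only and list its tables.
con = duckdb.connect("index.duckdb", read_only=True)  # example file name
print(con.sql("SHOW TABLES").fetchall())  # e.g. the stopwords and docs tables
con.close()
```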

Hi @greathero. We are not the authors of the notebook you linked, so it's hard for us to debug your specific issue. For example, I don't know why you'd need the duckdb files.
One issue could be the encoding of your segmentation bitmaps. In this dataset, the segmentation bitmaps are 32-bit RGBA PNG images: the alpha channel is set to 255, and the remaining 24 bits in the RGB channels encode the object ids in the annotations list. Unlabeled regions should have a value of 0.
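As a rough illustration of that layout (a minimal sketch with Pillow and NumPy; which channel holds which byte of the id is an assumption here, and the ids and file name are made up):

```python
import numpy as np
from PIL import Image

# Hypothetical (H, W) map of object ids; 0 marks unlabeled pixels.
ids = np.zeros((256, 256), dtype=np.uint32)
ids[64:128, 64:128] = 1    # first object in the annotations list
ids[160:224, 160:224] = 2  # second object

# Spread each 24-bit id over the R, G, B channels (R assumed to be the
# least significant byte) and set the alpha channel to 255 everywhere.
bitmap = np.zeros((*ids.shape, 4), dtype=np.uint8)
bitmap[..., 0] = ids & 255          # R
bitmap[..., 1] = (ids >> 8) & 255   # G
bitmap[..., 2] = (ids >> 16) & 255  # B
bitmap[..., 3] = 255                # A

Image.fromarray(bitmap, mode="RGBA").save("label.png")  # example file name

# Reading the ids back out:
decoded = np.asarray(Image.open("label.png")).astype(np.uint32)
recovered = decoded[..., 0] | (decoded[..., 1] << 8) | (decoded[..., 2] << 16)
assert np.array_equal(recovered, ids)
```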
If this is not the issue, I suggest you contact the authors of the notebook you linked.

tobiasc changed discussion status to closed
