Joshua Lochner committed
Commit 6866191 • 1 Parent(s): 6696705
Update README.md

README.md CHANGED:

---
tags:
- image-segmentation
- generic
library_name: generic
dataset:
- oxford-iiit pets
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-1.jpg
  example_title: Kedis
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg
  example_title: Cat in a Crate
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-3.jpg
  example_title: Two Cats Chilling
license: cc0-1.0
---

## Keras semantic segmentation models on the 🤗Hub! 🐶 🐕 🐩

Full credits go to [François Chollet](https://twitter.com/fchollet).

This repository contains the model from [this notebook on segmenting pets using a U-Net-like architecture](https://keras.io/examples/vision/oxford_pets_image_segmentation/). We've changed the inference part to enable the segmentation widget on the Hub (see ```pipeline.py```).
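
For orientation, a repo with `library_name: generic` serves the widget through a ```pipeline.py``` that exposes a pipeline class. The sketch below only illustrates that shape and is not the implementation in this repo; the class and method names follow the generic Inference API convention as we understand it, and the loading code is an assumption.

```python
# Minimal sketch of the shape of a generic image-segmentation pipeline.py.
# This is NOT the implementation shipped in this repo -- see pipeline.py for that.
from typing import Any, Dict, List

import tensorflow as tf
from PIL import Image


class PreTrainedPipeline:
    def __init__(self, path: str):
        # `path` is the local directory the Hub repo was downloaded to;
        # loading via tf.keras is an assumption about how the model was saved.
        self.model = tf.keras.models.load_model(path)

    def __call__(self, inputs: Image.Image) -> List[Dict[str, Any]]:
        # Preprocess the image, run the model, and convert the prediction into
        # the widget's expected list of {label, mask, score} dicts (see below).
        raise NotImplementedError
```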

## Background Information

The image classification task tells us which class is assigned to an image, and the object detection task draws a bounding box around an object in an image. But what if we want to know about the shapes of the objects in the image? Segmentation models help us segment images and reveal those shapes. Segmentation has many variants, including panoptic segmentation, instance segmentation, and semantic segmentation. This post is about hosting your Keras semantic segmentation models on the Hub.

Semantic segmentation models classify pixels, meaning they assign a class (e.g. cat or dog) to each pixel. The output of a model looks like the following.

![Raw Output](./raw_output.jpg)
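
As a rough sketch of what that raw output is, the model returns one probability per class for every pixel. The repo id, input size, and preprocessing below are placeholders and assumptions, not taken from this repo:

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Placeholder repo id -- substitute the id of this repository (or your own model).
model = from_pretrained_keras("your-username/your-segmentation-model")

# Dummy input; the Keras tutorial resizes images to 160x160, but check your model.
image = np.random.rand(160, 160, 3).astype("float32")

probs = model.predict(np.expand_dims(image, axis=0))[0]
print(probs.shape)  # (height, width, num_classes): per-pixel class probabilities
```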

We need to get the best prediction for every pixel.

![Mask](./mask.jpg)
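
Continuing the sketch above, picking the best prediction for every pixel is just an argmax over the class axis:

```python
import numpy as np

# `probs` has shape (height, width, num_classes); keep the most likely class per pixel.
mask = np.argmax(probs, axis=-1)  # (height, width) array of integer class ids
```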

This is still not readable. We have to convert it into a separate binary mask for each class, then turn each mask into a readable format by encoding it as base64. We return a list of dicts; each dictionary holds the label, the base64-encoded mask, and a score (semantic segmentation models don't return a score, so we return 1.0 in this case). You can find the full implementation in ```pipeline.py```.

![Binary Mask](./binary_mask.jpg)
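
A condensed sketch of that conversion step is shown below. The label names and exact dict keys are assumptions for illustration (the text above only specifies a label, a base64-encoded mask, and a score of 1.0); the real code lives in ```pipeline.py```.

```python
import base64
from io import BytesIO
from typing import Dict, List

import numpy as np
from PIL import Image

# Hypothetical class names for the Oxford-IIIT Pets trimaps -- adjust to your model.
LABELS = ["pet", "border", "background"]


def mask_to_base64(binary_mask: np.ndarray) -> str:
    # Encode a single 0/1 mask as a base64 string of a PNG image.
    img = Image.fromarray(binary_mask.astype("uint8") * 255)
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")


def format_predictions(mask: np.ndarray) -> List[Dict]:
    # One dict per class: its label, its binary mask as base64, and a fixed 1.0 score.
    return [
        {"label": label, "mask": mask_to_base64(mask == class_id), "score": 1.0}
        for class_id, label in enumerate(LABELS)
    ]
```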

Now that you know the output format expected from the model, you can host your Keras segmentation models (and other semantic segmentation models) in a similar fashion. Try it yourself and host your segmentation models!

![Segmented Cat](./hircin_the_cat.png)
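
If you want to host your own model the same way, ```huggingface_hub``` ships Keras helpers. A minimal sketch, with a placeholder repo id and a stand-in model:

```python
import tensorflow as tf
from huggingface_hub import push_to_hub_keras

# Stand-in for the U-Net-like model trained in the Keras tutorial.
model = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(3, 3, padding="same", input_shape=(160, 160, 3))]
)

# Uploads the saved model (plus a model card skeleton) to the Hub.
# Requires being logged in, e.g. via `huggingface-cli login`.
push_to_hub_keras(model, "your-username/my-segmentation-model")
```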