
# CIFAR-10 Upside Down Classifier

Built for the Fatima Fellowship 2022 Coding Challenge, DL for Vision track.

W&B Report


## Usage

### Model Definition

```python
from torch import nn
import timm
from huggingface_hub import PyTorchModelHubMixin


class UpDownEfficientNetB0(nn.Module, PyTorchModelHubMixin):
    """A simple Hub Mixin wrapper around a timm EfficientNet-B0, used to classify
    whether a CIFAR-10 image is upright or flipped upside down."""

    def __init__(self, **kwargs):
        super().__init__()
        # EfficientNet-B0 backbone with a single output unit for binary (upright vs. flipped) classification
        self.base_model = timm.create_model(
            'efficientnet_b0', num_classes=1, drop_rate=0.2, drop_path_rate=0.2
        )
        self.config = kwargs.pop("config", None)

    def forward(self, input):
        return self.base_model(input)
```
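
Because the class inherits from `PyTorchModelHubMixin`, a trained instance can be saved locally or pushed to the Hub directly. Below is a minimal sketch, assuming you have already trained the model; the directory and repository id are placeholders, and the exact `push_to_hub` signature may vary slightly across `huggingface_hub` versions.

```python
model = UpDownEfficientNetB0()

# ... training loop omitted ...

# Serialize the weights and config to a local directory
model.save_pretrained("updown-efficientnet-b0")

# Or upload to the Hub (requires being logged in; the repo id is a placeholder)
model.push_to_hub("your-username/updown-efficientnet-b0")
```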

### Loading the Model from Hub

```python
net = UpDownEfficientNetB0.from_pretrained("ID56/FF-Vision-CIFAR")
```
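
`from_pretrained` returns the model on CPU. If a GPU is available, the model (and later the input tensors) can be moved to it; a minimal sketch:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = net.to(device)  # move input tensors to the same device before calling net(...)
```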

### Running Inference

```python
import torch
from torchvision import transforms

CIFAR_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR_STD = (0.247, 0.243, 0.261)

transform = transforms.Compose([
    transforms.Resize((40, 40)),  # Resize takes the target size as a single (H, W) argument
    transforms.ToTensor(),
    transforms.Normalize(CIFAR_MEAN, CIFAR_STD)
])

image = load_some_image()   # Load a PIL Image (Resize runs before ToTensor, so a raw uint8 array will not work)
image = transform(image)    # Resize, convert to a CHW float tensor, and normalize
image = image.unsqueeze(0)  # Add a batch dimension: (1, 3, 40, 40)

net.eval()                  # Evaluation mode (disables dropout / drop-path)

with torch.no_grad():       # No gradients are needed for inference
    pred = net(image)
```
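
Since the model is built with `num_classes=1`, `pred` is a raw logit of shape `(1, 1)` rather than class probabilities. A minimal sketch of turning it into a label, assuming the positive logit corresponds to the flipped class (verify against the training labels if you need the opposite convention):

```python
import torch

prob_flipped = torch.sigmoid(pred)        # assumption: logit > 0 means "upside down"
is_flipped = (prob_flipped > 0.5).item()  # threshold at 0.5 for a hard label

print(f"P(flipped) = {prob_flipped.item():.3f}, flipped = {bool(is_flipped)}")
```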

## Dataset

ID56/FF-Vision-CIFAR was trained on CIFAR-10.