
https://huggingface.co/apple/mobilevit-x-small with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

```bash
npm i @xenova/transformers
```

**Example:** Perform image classification with `Xenova/mobilevit-x-small`.

```js
import { pipeline } from '@xenova/transformers';

// Create an image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/mobilevit-x-small', {
    quantized: false, // load the full-precision ONNX weights
});

// Classify an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
const output = await classifier(url);
console.log(output);
// [{ label: 'tiger, Panthera tigris', score: 0.8842423558235168 }]
```

**Note:** Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
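As a rough sketch of that conversion step, the original PyTorch checkpoint can be exported with Optimum's ONNX export CLI; the output directory name here is an arbitrary choice, and the exact Optimum extras you need may vary with your environment:

```shell
# Install 🤗 Optimum with ONNX export support
pip install "optimum[exporters]"

# Export the original checkpoint to ONNX
# (writes model.onnx plus config files into the target directory)
optimum-cli export onnx --model apple/mobilevit-x-small mobilevit-x-small-onnx/
```

You would then place the resulting `.onnx` file(s) in an `onnx/` subfolder of your model repo, mirroring the layout of this one.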
