# MobileLLM-125M

This repository contains [facebook/MobileLLM-125M](https://huggingface.co/facebook/MobileLLM-125M) with ONNX weights, to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

```bash
npm i @huggingface/transformers
```
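Alternatively, if you are targeting the browser without a bundler, the library can be loaded as an ES module from a CDN. This is a minimal sketch; in practice you would typically pin a specific version in the URL:

```js
// In a browser ES module, load Transformers.js from a CDN instead of NPM
import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";
```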
**Example:** Text generation with `onnx-community/MobileLLM-125M`.
```js
import { pipeline } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/MobileLLM-125M",
  { dtype: "fp32" },
);

// Define the prompt
const text = "Q: What is the capital of France?\nA: Paris\nQ: What is the capital of England?\nA:";

// Generate a response
const output = await generator(text, { max_new_tokens: 30 });
console.log(output[0].generated_text);
```
Example output:

```
Q: What is the capital of France?
A: Paris
Q: What is the capital of England?
A: London
Q: What is the capital of Scotland?
A: Edinburgh
Q: What is the capital of Wales?
A: Cardiff
```
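To reduce download size and memory use, you can request a quantized variant of the weights via the `dtype` option. The sketch below assumes this repo ships the quantized ONNX files that onnx-community exports typically include; the set of available dtypes varies by repo:

```js
// Same pipeline, but loading 8-bit quantized weights
// (assumes "q8" ONNX files exist in this repo)
const quantized = await pipeline(
  "text-generation",
  "onnx-community/MobileLLM-125M",
  { dtype: "q8" },
);
```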
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
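For reference, a conversion along these lines can be done with Optimum's CLI exporter. This is a sketch, not the exact command used for this repo: `--trust-remote-code` is shown because the base model ships custom code, and the output directory name is arbitrary:

```bash
# Install Optimum with ONNX export support, then export the base model
pip install "optimum[exporters]"
optimum-cli export onnx --model facebook/MobileLLM-125M --trust-remote-code mobilellm-125m-onnx/
```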