
Use custom models

By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries, which should work out-of-the-box. You can customize this as follows:

Settings

import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';

For a full list of available settings, check out the API Reference.
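For instance, once the settings above are applied, a pipeline will resolve models from your local path instead of fetching them from the Hub. The following is a minimal sketch, assuming a converted bert-base-uncased folder already exists under env.localModelPath (see the next section for how to produce one):

import { pipeline, env } from '@xenova/transformers';

// Resolve models from the local path only; never contact the Hub.
env.localModelPath = '/models/';
env.allowRemoteModels = false;

// Loads /models/bert-base-uncased/ from your own server.
const extractor = await pipeline('feature-extraction', 'bert-base-uncased');
const output = await extractor('Hello world!', { pooling: 'mean', normalize: true });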

Convert your models to ONNX

We recommend using our conversion script to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses πŸ€— Optimum to perform conversion and quantization of your model.

python -m scripts.convert --quantize --model_id <model_name_or_path>

For example, convert and quantize bert-base-uncased using:

python -m scripts.convert --quantize --model_id bert-base-uncased

This will save the following files to ./models/:

bert-base-uncased/
β”œβ”€β”€ config.json
β”œβ”€β”€ tokenizer.json
β”œβ”€β”€ tokenizer_config.json
└── onnx/
    β”œβ”€β”€ model.onnx
    └── model_quantized.onnx

For the full list of supported architectures, see the Optimum documentation.
