Please refer to the Inference API documentation for detailed information.
For 🤗 Transformers models, Pipelines power the API.
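As a rough illustration of what happens under the hood, a request to the API corresponds, roughly, to building a task-specific pipeline and calling it on the request payload. The task and model below are arbitrary examples, not what the API necessarily loads:

```python
from transformers import pipeline

# Build a task-specific pipeline; the model here is only an illustrative
# example. The API picks the model from the repository being queried.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Calling the pipeline on the payload yields the widget/API response.
print(classifier("I love this library!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```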
On top of Pipelines, and depending on the model type, there are several production optimizations, such as:

- compiling models to optimized intermediary representations (e.g. ONNX),
- maintaining a Least Recently Used cache so that the most popular models are always loaded,
- scaling the underlying compute infrastructure on the fly depending on the load constraints.
For models from other libraries, the API uses Starlette and runs in Docker containers. Each library defines the implementation of different pipelines.
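As a minimal sketch of that pattern only, a Starlette app serving a single pipeline could look like the snippet below. The `/predict` route, payload shape, and placeholder pipeline function are illustrative assumptions, not code taken from the actual containers:

```python
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

# Placeholder for a library-specific pipeline implementation; a real
# container would load the model once at startup and run inference here.
def run_pipeline(inputs: str) -> dict:
    return {"received_characters": len(inputs)}

async def predict(request):
    payload = await request.json()
    result = run_pipeline(payload["inputs"])
    return JSONResponse(result)

app = Starlette(routes=[Route("/predict", predict, methods=["POST"])])
# Served with an ASGI server, e.g. `uvicorn module_name:app`.
```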
To turn off the inference widget for your model, specify `inference: false` in your model card's metadata.
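For example, the metadata is the YAML block at the top of the model card (the repository's README.md), so the flag sits there alongside the rest of the metadata:

```yaml
---
# Other metadata (license, tags, etc.) can sit alongside this flag.
inference: false
---
```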
Some tasks are not yet supported by the Inference API, in which case no widget is displayed.
For all libraries except 🤗 Transformers, there is a mapping of each library to the tasks it supports in the API. When a model repository declares a task that is not supported by its library, `inference: false` is set by default.
If you are interested in accelerated inference, higher volumes of requests, or an SLA, please contact us at `api-enterprise@huggingface.co`.
To check your usage, head to the Inference API dashboard. Learn more about it in the Inference API documentation.
For programmatic access, the `huggingface_hub` library provides a client wrapper, documented here.
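As a minimal sketch, assuming a recent `huggingface_hub` release that ships `InferenceClient` (the task and model below are arbitrary examples):

```python
from huggingface_hub import InferenceClient

# An access token can be passed with InferenceClient(token=...); it is
# optional for public models but raises rate limits when provided.
client = InferenceClient()

# Task-specific helper; the model is only an illustrative choice.
result = client.text_classification(
    "I love this library!",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(result)
```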