Please refer to the Inference API documentation for detailed information.
For 🤗 Transformers models, Pipelines power the API.
On top of Pipelines, and depending on the model type, the API applies several additional production optimizations.
For models from other libraries, the API uses Starlette and runs in Docker containers. Each library defines the implementation of different pipelines.
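To illustrate the Pipeline abstraction that powers the API for 🤗 Transformers models, here is a minimal local sketch; the task and model below are arbitrary examples, and the hosted API layers its production optimizations on top of this same abstraction.

```python
from transformers import pipeline

# The same Pipeline abstraction that serves Transformers models in the API,
# run locally here. The task and model ID are only illustrative choices.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("I love this library!"))
```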
Specify `inference: false` in your model card's metadata.
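For example, the flag goes in the YAML block at the top of your model's README.md. This is a minimal sketch; keep any other metadata fields you already have.

```yaml
---
# Disables the hosted inference widget/API for this model.
inference: false
---
```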
If you are interested in accelerated inference, higher volumes of requests, or an SLA, please contact us at api-enterprise@huggingface.co.
You can head to the Inference API dashboard. Learn more about it in the Inference API documentation.
Yes, the `huggingface_hub` library has a client wrapper documented here.
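As a minimal sketch of programmatic access, assuming a recent `huggingface_hub` version that ships the `InferenceClient` wrapper (the model ID and parameters below are only illustrative):

```python
from huggingface_hub import InferenceClient

# Client wrapper around the Inference API's HTTP endpoints.
client = InferenceClient()

# Example call: text generation against a model hosted on the Hub.
output = client.text_generation(
    "The Hugging Face Hub is",
    model="gpt2",
    max_new_tokens=20,
)
print(output)
```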