Hub Python Library documentation

Inference Endpoints

Inference Endpoints provides a secure production solution to easily deploy models on dedicated, autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model hosted on the Hub. This page is a reference for huggingface_hub’s integration with Inference Endpoints. For more information about the Inference Endpoints product, check out its official documentation.

Check out the related guide to learn how to use huggingface_hub to manage your Inference Endpoints programmatically.

Inference Endpoints can be fully managed via API. The endpoints are documented with Swagger. The InferenceEndpoint class is a simple wrapper built on top of this API.

Methods

A subset of the Inference Endpoints features is implemented in HfApi: get_inference_endpoint(), list_inference_endpoints(), create_inference_endpoint(), update_inference_endpoint(), delete_inference_endpoint(), pause_inference_endpoint(), resume_inference_endpoint() and scale_to_zero_inference_endpoint().
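
For illustration, here is a minimal sketch of managing endpoints directly through HfApi (the endpoint name is a placeholder):

>>> from huggingface_hub import HfApi

>>> api = HfApi()
# List the Inference Endpoints of the authenticated user
>>> for endpoint in api.list_inference_endpoints():
...     print(endpoint.name, endpoint.status)

# Fetch a single endpoint by its name
>>> api.get_inference_endpoint("my-endpoint-name")
InferenceEndpoint(name='my-endpoint-name', ...)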

InferenceEndpoint

The main dataclass is InferenceEndpoint. It contains information about a deployed Inference Endpoint, including its configuration and current state. Once deployed, you can run inference on the Endpoint using the InferenceEndpoint.client and InferenceEndpoint.async_client properties, which respectively return an InferenceClient and an AsyncInferenceClient object.

class huggingface_hub.InferenceEndpoint

( namespace: str, raw: Dict, _token: Union[str, bool, None], _api: HfApi )

Parameters

  • name (str) — The unique name of the Inference Endpoint.
  • namespace (str) — The namespace where the Inference Endpoint is located.
  • repository (str) — The name of the model repository deployed on this Inference Endpoint.
  • status (InferenceEndpointStatus) — The current status of the Inference Endpoint.
  • url (str, optional) — The URL of the Inference Endpoint, if available. Only a deployed Inference Endpoint will have a URL.
  • framework (str) — The machine learning framework used for the model.
  • revision (str) — The specific model revision deployed on the Inference Endpoint.
  • task (str) — The task associated with the deployed model.
  • created_at (datetime.datetime) — The timestamp when the Inference Endpoint was created.
  • updated_at (datetime.datetime) — The timestamp of the last update of the Inference Endpoint.
  • type (InferenceEndpointType) — The type of the Inference Endpoint (public, protected, private).
  • raw (Dict) — The raw dictionary data returned from the API.
  • token (str or bool, optional) — Authentication token for the Inference Endpoint, if set when requesting the API. Will default to the locally saved token if not provided. Pass token=False if you don’t want to send your token to the server.

Contains information about a deployed Inference Endpoint.

Example:

>>> from huggingface_hub import get_inference_endpoint
>>> endpoint = get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)

# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'

# Run inference
>>> endpoint.client.text_to_image(...)

# Pause endpoint to save $$$
>>> endpoint.pause()

# ...
# Resume and wait for deployment
>>> endpoint.resume()
>>> endpoint.wait()
>>> endpoint.client.text_to_image(...)

from_raw

( raw: Dict, namespace: str, token: Union[str, bool, None] = None, api: Optional[HfApi] = None )

Initialize object from raw dictionary.

client

( ) → InferenceClient

Returns

InferenceClient

an inference client pointing to the deployed endpoint.

Raises

InferenceEndpointError

Returns a client to make predictions on this Inference Endpoint.
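
For illustration, assuming the endpoint serves a text-generation model (endpoint name and prompt are placeholders), the returned client can be used like any other InferenceClient:

>>> from huggingface_hub import get_inference_endpoint

>>> endpoint = get_inference_endpoint("my-endpoint-name")
>>> client = endpoint.client  # InferenceClient bound to endpoint.url
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)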

async_client

( ) → AsyncInferenceClient

Returns

AsyncInferenceClient

an asyncio-compatible inference client pointing to the deployed endpoint.

Raises

InferenceEndpointError

Returns a client to make predictions on this Inference Endpoint.
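
A minimal sketch of the asynchronous counterpart, assuming the same hypothetical text-generation endpoint as above:

>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint

>>> async def generate():
...     endpoint = get_inference_endpoint("my-endpoint-name")
...     client = endpoint.async_client  # AsyncInferenceClient bound to endpoint.url
...     return await client.text_generation("The huggingface_hub library is ")

>>> asyncio.run(generate())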

delete

( )

Delete the Inference Endpoint.

This operation is not reversible. If you don’t want to be charged for an Inference Endpoint, it is preferable to pause it with InferenceEndpoint.pause() or scale it to zero with InferenceEndpoint.scale_to_zero().

This is an alias for HfApi.delete_inference_endpoint().

fetch

( ) → InferenceEndpoint

Returns

InferenceEndpoint

the same Inference Endpoint, mutated in place with the latest data.

Fetch latest information about the Inference Endpoint.
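
A short sketch of a typical use, with a placeholder endpoint name and illustrative status values:

>>> from huggingface_hub import get_inference_endpoint

>>> endpoint = get_inference_endpoint("my-endpoint-name")
>>> endpoint.status
'pending'

# ... some time later, refresh the local object with the latest server state
>>> endpoint.fetch()
InferenceEndpoint(name='my-endpoint-name', ...)
>>> endpoint.status
'running'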

pause

( ) → InferenceEndpoint

Returns

InferenceEndpoint

the same Inference Endpoint, mutated in place with the latest data.

Pause the Inference Endpoint.

A paused Inference Endpoint will not be charged. It can be resumed at any time using InferenceEndpoint.resume(). This is different from scaling the Inference Endpoint to zero with InferenceEndpoint.scale_to_zero(): a scaled-to-zero endpoint is automatically restarted when a request is made to it, whereas a paused one must be resumed manually.

This is an alias for HfApi.pause_inference_endpoint(). The current object is mutated in place with the latest data from the server.

resume

( running_ok: bool = True ) → InferenceEndpoint

Parameters

  • running_ok (bool, optional) — If True, the method will not raise an error if the Inference Endpoint is already running. Defaults to True.

Returns

InferenceEndpoint

the same Inference Endpoint, mutated in place with the latest data.

Resume the Inference Endpoint.

This is an alias for HfApi.resume_inference_endpoint(). The current object is mutated in place with the latest data from the server.

scale_to_zero

( ) → InferenceEndpoint

Returns

InferenceEndpoint

the same Inference Endpoint, mutated in place with the latest data.

Scale Inference Endpoint to zero.

An Inference Endpoint scaled to zero will not be charged. It will be resumed automatically on the next request made to it, with a cold start delay. This is different from pausing the Inference Endpoint with InferenceEndpoint.pause(), which requires a manual resume with InferenceEndpoint.resume().

This is an alias for HfApi.scale_to_zero_inference_endpoint(). The current object is mutated in place with the latest data from the server.
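
A sketch contrasting the two cost-saving options, reusing the endpoint object from the example above (the status value shown is illustrative):

# Pause: not billed, but must be resumed manually before serving requests again
>>> endpoint.pause()
>>> endpoint.resume()
>>> endpoint.wait()

# Scale to zero: not billed, restarted automatically by the next request (with a cold start delay)
>>> endpoint.scale_to_zero()
>>> endpoint.status
'scaledToZero'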

update

( accelerator: Optional[str] = None, instance_size: Optional[str] = None, instance_type: Optional[str] = None, min_replica: Optional[int] = None, max_replica: Optional[int] = None, repository: Optional[str] = None, framework: Optional[str] = None, revision: Optional[str] = None, task: Optional[str] = None ) → InferenceEndpoint

Parameters

  • accelerator (str, optional) — The hardware accelerator to be used for inference (e.g. "cpu").
  • instance_size (str, optional) — The size or type of the instance to be used for hosting the model (e.g. "x4").
  • instance_type (str, optional) — The cloud instance type where the Inference Endpoint will be deployed (e.g. "intel-icl").
  • min_replica (int, optional) — The minimum number of replicas (instances) to keep running for the Inference Endpoint.
  • max_replica (int, optional) — The maximum number of replicas (instances) to scale to for the Inference Endpoint.
  • repository (str, optional) — The name of the model repository associated with the Inference Endpoint (e.g. "gpt2").
  • framework (str, optional) — The machine learning framework used for the model (e.g. "custom").
  • revision (str, optional) — The specific model revision to deploy on the Inference Endpoint (e.g. "6c0e6080953db56375760c0471a8c5f2929baf11").
  • task (str, optional) — The task on which to deploy the model (e.g. "text-classification").

Returns

InferenceEndpoint

the same Inference Endpoint, mutated in place with the latest data.

Update the Inference Endpoint.

This method allows the update of either the compute configuration, the deployed model, or both. All arguments are optional but at least one must be provided.

This is an alias for HfApi.update_inference_endpoint(). The current object is mutated in place with the latest data from the server.
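
A short sketch reusing the example values from the parameter list above (all values are illustrative):

# Update to a different compute configuration
>>> endpoint.update(accelerator="cpu", instance_size="x4", instance_type="intel-icl")

# Or deploy another model revision on the same endpoint
>>> endpoint.update(repository="gpt2", revision="6c0e6080953db56375760c0471a8c5f2929baf11")
>>> endpoint.wait()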

wait

( timeout: Optional[int] = None, refresh_every: int = 5 ) → InferenceEndpoint

Parameters

  • timeout (int, optional) — The maximum time to wait for the Inference Endpoint to be deployed, in seconds. If None, will wait indefinitely.
  • refresh_every (int, optional) — The time to wait between each fetch of the Inference Endpoint status, in seconds. Defaults to 5s.

Returns

InferenceEndpoint

the same Inference Endpoint, mutated in place with the latest data.

Raises

InferenceEndpointError or InferenceEndpointTimeoutError

  • InferenceEndpointError — If the Inference Endpoint ended up in a failed state.
  • InferenceEndpointTimeoutError — If the Inference Endpoint is not deployed after timeout seconds.

Wait for the Inference Endpoint to be deployed.

Information from the server is fetched every refresh_every seconds (5 by default). If the Inference Endpoint is not deployed after timeout seconds, an InferenceEndpointTimeoutError is raised. The InferenceEndpoint is mutated in place with the latest data.
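
A sketch of waiting with a timeout, assuming InferenceEndpointTimeoutError can be imported from the top-level huggingface_hub package:

>>> from huggingface_hub import InferenceEndpointTimeoutError

>>> endpoint.resume()
>>> try:
...     endpoint.wait(timeout=300, refresh_every=10)
... except InferenceEndpointTimeoutError:
...     print("Endpoint still not deployed after 5 minutes")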

InferenceEndpointStatus

class huggingface_hub.InferenceEndpointStatus

( value, names = None, module = None, qualname = None, type = None, start = 1 )

An enumeration of the possible states of an Inference Endpoint.
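
For illustration, assuming the enumeration exposes a RUNNING member matching the 'running' status shown in the example above:

>>> from huggingface_hub import InferenceEndpointStatus, get_inference_endpoint

>>> endpoint = get_inference_endpoint("my-text-to-image")
>>> if endpoint.status == InferenceEndpointStatus.RUNNING:
...     image = endpoint.client.text_to_image("a prompt of your choice")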

InferenceEndpointType

class huggingface_hub.InferenceEndpointType

( value, names = None, module = None, qualname = None, type = None, start = 1 )

An enumeration of the possible Inference Endpoint types: public, protected or private.
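
For illustration, assuming the members mirror the public, protected and private types listed above:

>>> from huggingface_hub import InferenceEndpointType, get_inference_endpoint

>>> endpoint = get_inference_endpoint("my-text-to-image")
>>> endpoint.type == InferenceEndpointType.PROTECTED
True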

InferenceEndpointError

class huggingface_hub.InferenceEndpointError

( )

Generic exception when dealing with Inference Endpoints.
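
A sketch of catching this error when accessing InferenceEndpoint.client before the endpoint is deployed (endpoint name is a placeholder):

>>> from huggingface_hub import InferenceEndpointError, get_inference_endpoint

>>> endpoint = get_inference_endpoint("my-endpoint-name")
>>> try:
...     client = endpoint.client  # raises if the endpoint is not deployed yet
... except InferenceEndpointError:
...     endpoint.wait()           # wait for the deployment to complete, then retry
...     client = endpoint.client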
