Hub Python Library documentation


HfApi Client

Below is the documentation for the HfApi class, which serves as a Python wrapper for the Hugging Face Hub’s API.

All methods from the HfApi are also accessible from the package’s root directly. Both approaches are detailed below.

Using the root methods is more straightforward, but the HfApi class gives you more flexibility. In particular, you can pass a token that will be reused in all HTTP calls. This differs from huggingface-cli login or login() in that the token is not persisted on the machine. It is also possible to provide a different endpoint or configure a custom user-agent.

from huggingface_hub import HfApi, list_models

# Use root method
models = list_models()

# Or configure a HfApi client
hf_api = HfApi(
    endpoint="https://huggingface.co", # Can be a Private Hub endpoint.
    token="hf_xxx", # Token is not persisted on the machine.
)
models = hf_api.list_models()

HfApi

class huggingface_hub.HfApi

( endpoint: Optional[str] = None token: Union[str, bool, None] = None library_name: Optional[str] = None library_version: Optional[str] = None user_agent: Union[Dict, str, None] = None headers: Optional[Dict[str, str]] = None )

accept_access_request

( repo_id: str user: str repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — The id of the repo to accept the access request for.
  • user (str) — The username of the user whose access request should be accepted.
  • repo_type (str, optional) — The type of the repo to accept access request for. Must be one of model, dataset or space. Defaults to model.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token).

Raises

HTTPError

  • HTTPError — HTTP 400 if the repo is not gated.
  • HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have a write or admin role in the organization the repo belongs to or if you passed a read token.
  • HTTPError — HTTP 404 if the user does not exist on the Hub.
  • HTTPError — HTTP 404 if the user access request cannot be found.
  • HTTPError — HTTP 404 if the user access request is already in the accepted list.

Accept an access request from a user for a given gated repo.

Once the request is accepted, the user will be able to download any file of the repo and access the community tab. If the approval mode is automatic, you don’t have to accept requests manually. An accepted request can be cancelled or rejected at any time using cancel_access_request() and reject_access_request().

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
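For illustration, a minimal sketch in the style of the other examples (the repo id and username are hypothetical; you need admin rights on the gated repo):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.accept_access_request(repo_id="username/gated-model", user="some-user")
```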

add_collection_item

( collection_slug: str item_id: str item_type: CollectionItemType_T note: Optional[str] = None exists_ok: bool = False token: Optional[str] = None )

Parameters

  • collection_slug (str) — Slug of the collection to update. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
  • item_id (str) — ID of the item to add to the collection. It can be the ID of a repo on the Hub (e.g. "facebook/bart-large-mnli") or a paper id (e.g. "2307.09288").
  • item_type (str) — Type of the item to add. Can be one of "model", "dataset", "space" or "paper".
  • note (str, optional) — A note to attach to the item in the collection. The maximum size for a note is 500 characters.
  • exists_ok (bool, optional) — If True, do not raise an error if item already exists.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Raises

HTTPError

  • HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have a write or admin role in the organization the repo belongs to or if you passed a read token.
  • HTTPError — HTTP 404 if the item you try to add to the collection does not exist on the Hub.
  • HTTPError — HTTP 409 if the item you try to add to the collection is already in the collection (and exists_ok=False)

Add an item to a collection on the Hub.

Returns: Collection

Example:

>>> from huggingface_hub import add_collection_item
>>> collection = add_collection_item(
...     collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
...     item_id="pierre-loic/climate-news-articles",
...     item_type="dataset"
... )
>>> collection.items[-1].item_id
"pierre-loic/climate-news-articles"
# ^ the item was added at the last position of the collection

# Add item with a note
>>> add_collection_item(
...     collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
...     item_id="datasets/climate_fever",
...     item_type="dataset",
...     note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
(...)

add_space_secret

( repo_id: str key: str value: str description: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
  • key (str) — Secret key. Example: "GITHUB_API_KEY"
  • value (str) — Secret value. Example: "your_github_api_key".
  • description (str, optional) — Secret description. Example: "Github API key to access the Github API".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Adds or updates a secret in a Space.

Secrets allow you to set secret keys or tokens in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
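A hedged usage sketch (the Space id and values are hypothetical; you need write access to the Space):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.add_space_secret(
...     repo_id="username/my-space",
...     key="GITHUB_API_KEY",
...     value="your_github_api_key",
...     description="Github API key to access the Github API",
... )
```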

add_space_variable

( repo_id: str key: str value: str description: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
  • key (str) — Variable key. Example: "MODEL_REPO_ID"
  • value (str) — Variable value. Example: "the_model_repo_id".
  • description (str) — Description of the variable. Example: "Model Repo ID of the implemented model".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Adds or updates a variable in a Space.

Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.

cancel_access_request

( repo_id: str user: str repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — The id of the repo to cancel the access request for.
  • user (str) — The username of the user whose access request should be cancelled.
  • repo_type (str, optional) — The type of the repo to cancel access request for. Must be one of model, dataset or space. Defaults to model.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token).

Raises

HTTPError

  • HTTPError — HTTP 400 if the repo is not gated.
  • HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have a write or admin role in the organization the repo belongs to or if you passed a read token.
  • HTTPError — HTTP 404 if the user does not exist on the Hub.
  • HTTPError — HTTP 404 if the user access request cannot be found.
  • HTTPError — HTTP 404 if the user access request is already in the pending list.

Cancel an access request from a user for a given gated repo.

A cancelled request will go back to the pending list and the user will lose access to the repo.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.

change_discussion_status

( repo_id: str discussion_num: int new_status: Literal['open', 'closed'] token: Optional[str] = None comment: Optional[str] = None repo_type: Optional[str] = None ) DiscussionStatusChange

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
  • new_status (str) — The new status for the discussion, either "open" or "closed".
  • comment (str, optional) — An optional comment to post with the status change.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)

Returns

DiscussionStatusChange

the status change event

Closes or re-opens a Discussion or Pull Request.

Examples:

>>> new_status = "closed"
>>> HfApi().change_discussion_status(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_status=new_status,
... )
# DiscussionStatusChange(id='deadbeef0000000', type='status-change', ...)

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

comment_discussion

( repo_id: str discussion_num: int comment: str token: Optional[str] = None repo_type: Optional[str] = None ) DiscussionComment

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
  • comment (str) — The content of the comment to create. Comments support markdown formatting.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)

Returns

DiscussionComment

the newly created comment

Creates a new comment on the given Discussion.

Examples:


>>> comment = """
... Hello @otheruser!
...
... # This is a title
...
... **This is bold**, *this is italic* and ~this is strikethrough~
... And [this](http://url) is a link
... """

>>> HfApi().comment_discussion(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     comment=comment,
... )
# DiscussionComment(id='deadbeef0000000', type='comment', ...)

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

create_branch

( repo_id: str branch: str revision: Optional[str] = None token: Optional[str] = None repo_type: Optional[str] = None exist_ok: bool = False )

Parameters

  • repo_id (str) — The repository in which the branch will be created. Example: "user/my-cool-model".
  • branch (str) — The name of the branch to create.
  • revision (str, optional) — The git revision to create the branch from. It can be a branch name or the OID/SHA of a commit, as a hexadecimal string. Defaults to the head of the "main" branch.
  • token (str, optional) — Authentication token. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if creating a branch on a dataset or space, None or "model" if tagging a model. Default is None.
  • exist_ok (bool, optional, defaults to False) — If True, do not raise an error if branch already exists.

Raises

RepositoryNotFoundError or BadRequestError or HfHubHTTPError

  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
  • BadRequestError — If branch is an invalid git reference, e.g. refs/pr/5 or refs/foo/bar.
  • HfHubHTTPError — If the branch already exists on the repo (error 409) and exist_ok is set to False.

Create a new branch for a repo on the Hub, starting from the specified revision (defaults to main). To find a revision suiting your needs, you can use list_repo_refs() or list_repo_commits().
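A short sketch with hypothetical repo and branch names:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Branch from the head of "main"
>>> api.create_branch(repo_id="username/my-cool-model", branch="experiment-1")
# Branch from a specific commit, ignoring the error if the branch already exists
>>> api.create_branch(
...     repo_id="username/my-cool-model",
...     branch="release-candidate",
...     revision="6c0e6080953db56375760c0471a8c5f2929baf11",
...     exist_ok=True,
... )
```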

create_collection

( title: str namespace: Optional[str] = None description: Optional[str] = None private: bool = False exists_ok: bool = False token: Optional[str] = None )

Parameters

  • title (str) — Title of the collection to create. Example: "Recent models".
  • namespace (str, optional) — Namespace of the collection to create (username or org). Will default to the owner name.
  • description (str, optional) — Description of the collection to create.
  • private (bool, optional) — Whether the collection should be private or not. Defaults to False (i.e. public collection).
  • exists_ok (bool, optional) — If True, do not raise an error if collection already exists.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Create a new Collection on the Hub.

Returns: Collection

Example:

>>> from huggingface_hub import create_collection
>>> collection = create_collection(
...     title="ICCV 2023",
...     description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
>>> collection.slug
"username/iccv-2023-64f9a55bb3115b4f513ec026"

create_commit

( repo_id: str operations: Iterable[CommitOperation] commit_message: str commit_description: Optional[str] = None token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None create_pr: Optional[bool] = None num_threads: int = 5 parent_commit: Optional[str] = None run_as_future: bool = False ) CommitInfo or Future

Parameters

  • repo_id (str) — The repository in which the commit will be created, for example: "username/custom_transformers"
  • operations (Iterable of CommitOperation()) — An iterable of operations to include in the commit, either CommitOperationAdd to upload a file, CommitOperationDelete to delete a file or folder, or CommitOperationCopy to copy a file.

    Operation objects will be mutated to include information relative to the upload. Do not reuse the same objects for multiple commits.

  • commit_message (str) — The summary (first line) of the commit that will be created.
  • commit_description (str, optional) — The description of the commit that will be created
  • token (str, optional) — Authentication token, obtained with login(). Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
  • create_pr (bool, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
  • num_threads (int, optional) — Number of concurrent threads for uploading files. Defaults to 5. Setting it to 2 means at most 2 files will be uploaded concurrently.
  • parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
  • run_as_future (bool, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passing run_as_future=True will return a Future object. Defaults to False.

Returns

CommitInfo or Future

Instance of CommitInfo containing information about the newly created commit (commit hash, commit url, pr url, commit message,…). If run_as_future=True is passed, returns a Future object which will contain the result when executed.

Raises

ValueError or RepositoryNotFoundError

  • ValueError — If commit message is empty.
  • ValueError — If parent commit is not a valid commit OID.
  • ValueError — If a README.md file with an invalid metadata section is committed. In this case, the commit will fail early, before trying to upload any file.
  • ValueError — If create_pr is True and revision is neither None nor "main".
  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.

Creates a commit in the given repo, deleting & uploading files as needed.

The input list of CommitOperation will be mutated during the commit process. Do not reuse the same objects for multiple commits.

create_commit assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If repo does not exist, create it first using create_repo().

create_commit is limited to 25k LFS files and a 1GB payload for regular files.
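A minimal sketch combining an upload and a deletion in a single commit (repo id and paths are hypothetical; CommitOperationAdd also accepts a local file path instead of raw bytes):

```python
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationDelete
>>> api = HfApi()
>>> operations = [
...     CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=b"# My model"),
...     CommitOperationDelete(path_in_repo="old_weights.bin"),
... ]
>>> api.create_commit(
...     repo_id="username/my-cool-model",
...     operations=operations,
...     commit_message="Update README and remove old weights",
... )
```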

create_commits_on_pr

( repo_id: str addition_commits: List[List[CommitOperationAdd]] deletion_commits: List[List[CommitOperationDelete]] commit_message: str commit_description: Optional[str] = None token: Optional[str] = None repo_type: Optional[str] = None merge_pr: bool = True num_threads: int = 5 verbose: bool = False ) str

Parameters

  • repo_id (str) — The repository in which the commits will be pushed. Example: "username/my-cool-model".
  • addition_commits (List of List of CommitOperationAdd) — A list containing lists of CommitOperationAdd. Each sublist will result in a commit on the PR.
  • deletion_commits (List of List of CommitOperationDelete) — A list containing lists of CommitOperationDelete. Each sublist will result in a commit on the PR. Deletion commits are pushed before addition commits.

  • commit_message (str) — The summary (first line) of the commit that will be created. Will also be the title of the PR.
  • commit_description (str, optional) — The description of the commit that will be created. The description will be added to the PR.
  • token (str, optional) — Authentication token, obtained with login(). Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • merge_pr (bool) — If set to True, the Pull Request is merged at the end of the process. Defaults to True.
  • num_threads (int, optional) — Number of concurrent threads for uploading files. Defaults to 5.
  • verbose (bool) — If set to True, process will run on verbose mode i.e. print information about the ongoing tasks. Defaults to False.

Returns

str

URL to the created PR.

Raises

MultiCommitException

  • MultiCommitException — If an unexpected issue occurs in the process: empty commits, unexpected commits in a PR, unexpected PR description, etc.

Push changes to the Hub in multiple commits.

Commits are pushed to a draft PR branch. If the upload fails or gets interrupted, it can be resumed. Progress is tracked in the PR description. At the end of the process, the PR is set as open and the title is updated to match the initial commit message. If merge_pr=True is passed, the PR is merged automatically.

All deletion commits are pushed first, followed by the addition commits. The order of the commits is not guaranteed as we might implement parallel commits in the future. Be sure that you are not updating the same file several times.

create_commits_on_pr is experimental. Its API and behavior are subject to change in the future without prior notice.

create_commits_on_pr assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If repo does not exist, create it first using create_repo().

Example:

>>> from huggingface_hub import HfApi, plan_multi_commits
>>> addition_commits, deletion_commits = plan_multi_commits(
...     operations=[
...          CommitOperationAdd(...),
...          CommitOperationAdd(...),
...          CommitOperationDelete(...),
...          CommitOperationDelete(...),
...          CommitOperationAdd(...),
...     ],
... )
>>> HfApi().create_commits_on_pr(
...     repo_id="my-cool-model",
...     addition_commits=addition_commits,
...     deletion_commits=deletion_commits,
...     (...)
...     verbose=True,
... )

create_discussion

( repo_id: str title: str token: Optional[str] = None description: Optional[str] = None repo_type: Optional[str] = None pull_request: bool = False )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • title (str) — The title of the discussion. It can be up to 200 characters long, and must be at least 3 characters long. Leading and trailing whitespaces will be stripped.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
  • description (str, optional) — An optional description for the Pull Request. Defaults to "Discussion opened with the huggingface_hub Python library"
  • pull_request (bool, optional) — Whether to create a Pull Request or discussion. If True, creates a Pull Request. If False, creates a discussion. Defaults to False.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.

Creates a Discussion or Pull Request.

Pull Requests created programmatically will be in "draft" status.

Creating a Pull Request with changes can also be done at once with HfApi.create_commit().

Returns: DiscussionWithDetails

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
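A hedged sketch with a hypothetical repo id and texts; pass pull_request=True to open a draft Pull Request instead of a Discussion:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> discussion = api.create_discussion(
...     repo_id="username/repo_name",
...     title="Question about the training data",
...     description="Which corpus was this model trained on?",
... )
```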

create_inference_endpoint

( name: str repository: str framework: str accelerator: str instance_size: str instance_type: str region: str vendor: str account_id: Optional[str] = None min_replica: int = 0 max_replica: int = 1 revision: Optional[str] = None task: Optional[str] = None custom_image: Optional[Dict] = None type: InferenceEndpointType = <InferenceEndpointType.PROTECTED: 'protected'> namespace: Optional[str] = None token: Optional[str] = None ) InferenceEndpoint

Parameters

  • name (str) — The unique name for the new Inference Endpoint.
  • repository (str) — The name of the model repository associated with the Inference Endpoint (e.g. "gpt2").
  • framework (str) — The machine learning framework used for the model (e.g. "custom").
  • accelerator (str) — The hardware accelerator to be used for inference (e.g. "cpu").
  • instance_size (str) — The size or type of the instance to be used for hosting the model (e.g. "large").
  • instance_type (str) — The cloud instance type where the Inference Endpoint will be deployed (e.g. "c6i").
  • region (str) — The cloud region in which the Inference Endpoint will be created (e.g. "us-east-1").
  • vendor (str) — The cloud provider or vendor where the Inference Endpoint will be hosted (e.g. "aws").
  • account_id (str, optional) — The account ID used to link a VPC to a private Inference Endpoint (if applicable).
  • min_replica (int, optional) — The minimum number of replicas (instances) to keep running for the Inference Endpoint. Defaults to 0.
  • max_replica (int, optional) — The maximum number of replicas (instances) to scale to for the Inference Endpoint. Defaults to 1.
  • revision (str, optional) — The specific model revision to deploy on the Inference Endpoint (e.g. "6c0e6080953db56375760c0471a8c5f2929baf11").
  • task (str, optional) — The task on which to deploy the model (e.g. "text-classification").
  • custom_image (Dict, optional) — A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an Inference Endpoint running on the text-generation-inference (TGI) framework (see examples).
  • type (InferenceEndpointType, optional) — The type of the Inference Endpoint, which can be "protected" (default), "public" or "private".
  • namespace (str, optional) — The namespace where the Inference Endpoint will be created. Defaults to the current user’s namespace.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Returns

InferenceEndpoint

information about the newly created Inference Endpoint.

Create a new Inference Endpoint.

Example:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
...     "my-endpoint-name",
...     repository="gpt2",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="cpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="medium",
...     instance_type="c6i",
... )
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', status="pending",...)

# Run inference on the endpoint
>>> endpoint.client.text_generation(...)
"..."
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_inference_endpoint(
...     "aws-zephyr-7b-beta-0486",
...     repository="HuggingFaceH4/zephyr-7b-beta",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="gpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="medium",
...     instance_type="g5.2xlarge",
...     custom_image={
...         "health_route": "/health",
...         "env": {
...             "MAX_BATCH_PREFILL_TOKENS": "2048",
...             "MAX_INPUT_LENGTH": "1024",
...             "MAX_TOTAL_TOKENS": "1512",
...             "MODEL_ID": "/repository"
...         },
...         "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
...     },
... )

create_pull_request

( repo_id: str title: str token: Optional[str] = None description: Optional[str] = None repo_type: Optional[str] = None )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • title (str) — The title of the discussion. It can be up to 200 characters long, and must be at least 3 characters long. Leading and trailing whitespaces will be stripped.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
  • description (str, optional) — An optional description for the Pull Request. Defaults to "Discussion opened with the huggingface_hub Python library"
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.

Creates a Pull Request. Pull Requests created programmatically will be in "draft" status.

Creating a Pull Request with changes can also be done at once with HfApi.create_commit().

This is a wrapper around HfApi.create_discussion().

Returns: DiscussionWithDetails

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

create_repo

( repo_id: str token: Optional[str] = None private: bool = False repo_type: Optional[str] = None exist_ok: bool = False space_sdk: Optional[str] = None space_hardware: Optional[SpaceHardware] = None space_storage: Optional[SpaceStorage] = None space_sleep_time: Optional[int] = None space_secrets: Optional[List[Dict[str, str]]] = None space_variables: Optional[List[Dict[str, str]]] = None ) RepoUrl

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
  • private (bool, optional, defaults to False) — Whether the model repo should be private.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • exist_ok (bool, optional, defaults to False) — If True, do not raise an error if repo already exists.
  • space_sdk (str, optional) — Choice of SDK to use if repo_type is “space”. Can be “streamlit”, “gradio”, “docker”, or “static”.
  • space_hardware (SpaceHardware or str, optional) — Choice of Hardware if repo_type is “space”. See SpaceHardware for a complete list.
  • space_storage (SpaceStorage or str, optional) — Choice of persistent storage tier. Example: "small". See SpaceStorage for a complete list.
  • space_sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
  • space_secrets (List[Dict[str, str]], optional) — A list of secret keys to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
  • space_variables (List[Dict[str, str]], optional) — A list of public environment variables to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.

Returns

RepoUrl

URL to the newly created repo. Value is a subclass of str containing attributes like endpoint, repo_type and repo_id.

Create an empty repo on the HuggingFace Hub.
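Two hedged sketches with hypothetical repo ids, one for a private model repo and one for a Gradio Space configured with a secret and a public variable:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(repo_id="username/my-cool-model", private=True)

# Create a Gradio Space with a secret and a public variable
>>> api.create_repo(
...     repo_id="username/my-demo",
...     repo_type="space",
...     space_sdk="gradio",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_xxx"}],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "username/my-cool-model"}],
... )
```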

create_tag

( repo_id: str tag: str tag_message: Optional[str] = None revision: Optional[str] = None token: Optional[str] = None repo_type: Optional[str] = None exist_ok: bool = False )

Parameters

  • repo_id (str) — The repository in which a commit will be tagged. Example: "user/my-cool-model".
  • tag (str) — The name of the tag to create.
  • tag_message (str, optional) — The description of the tag to create.
  • revision (str, optional) — The git revision to tag. It can be a branch name or the OID/SHA of a commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. Defaults to the head of the "main" branch.
  • token (str, optional) — Authentication token. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if tagging a dataset or space, None or "model" if tagging a model. Default is None.
  • exist_ok (bool, optional, defaults to False) — If True, do not raise an error if tag already exists.

Raises

RepositoryNotFoundError or RevisionNotFoundError or HfHubHTTPError

  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
  • RevisionNotFoundError — If revision is not found (error 404) on the repo.
  • HfHubHTTPError — If the branch already exists on the repo (error 409) and exist_ok is set to False.

Tag a given commit of a repo on the Hub.
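A short sketch with hypothetical names, tagging the head of "main" and then a specific commit (using the 7-character shorthand):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_tag(repo_id="username/my-cool-model", tag="v1.0", tag_message="First stable release")
# Tag a specific commit instead of the head of "main"
>>> api.create_tag(
...     repo_id="username/my-cool-model",
...     tag="v1.0-rc1",
...     revision="6c0e608",
...     exist_ok=True,
... )
```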

dataset_info

( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) hf_api.DatasetInfo

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • revision (str, optional) — The revision of the dataset repository from which to get the information.
  • timeout (float, optional) — How many seconds to wait for the server to respond before giving up.
  • files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults to False.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

hf_api.DatasetInfo

The dataset repository information.

Get info on one specific dataset on huggingface.co.

The dataset can be private if you pass a token with the required permissions.

Raises the following errors:

  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
  • RevisionNotFoundError If the revision to download from cannot be found.
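Since RepositoryNotFoundError is raised for missing (or inaccessible private) datasets, a hypothetical helper can turn the lookup into an optional result:

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

api = HfApi()

def describe_dataset(repo_id: str):
    """Return (id, last_modified) for a dataset, or None if it can't be found."""
    try:
        info = api.dataset_info(repo_id, files_metadata=True)
    except RepositoryNotFoundError:
        return None
    return info.id, info.last_modified
```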

delete_branch

< >

( repo_id: str branch: str token: Optional[str] = None repo_type: Optional[str] = None )

Parameters

  • repo_id (str) — The repository in which a branch will be deleted. Example: "user/my-cool-model".
  • branch (str) — The name of the branch to delete.
  • token (str, optional) — Authentication token. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if deleting a branch from a dataset or space, None or "model" if deleting from a model. Default is None.

Raises

RepositoryNotFoundError or HfHubHTTPError

  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
  • HfHubHTTPError — If trying to delete a protected branch. Ex: main cannot be deleted.
  • HfHubHTTPError — If trying to delete a branch that does not exist.

Delete a branch from a repo on the Hub.
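Because both a missing branch and a protected branch surface as HfHubHTTPError, a small sketch (helper name is hypothetical) can make branch cleanup best-effort:

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import HfHubHTTPError

api = HfApi()

def delete_branch_if_possible(repo_id: str, branch: str) -> bool:
    """Delete `branch`; return False if it doesn't exist or is protected (e.g. main)."""
    try:
        api.delete_branch(repo_id, branch=branch)
    except HfHubHTTPError:
        return False
    return True
```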

delete_collection

< >

( collection_slug: str missing_ok: bool = False token: Optional[str] = None )

Parameters

  • collection_slug (str) — Slug of the collection to delete. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
  • missing_ok (bool, optional) — If True, do not raise an error if collection doesn’t exist.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Delete a collection on the Hub.

Example:

>>> from huggingface_hub import delete_collection
>>> collection = delete_collection("username/useless-collection-64f9a55bb3115b4f513ec026", missing_ok=True)

This is a non-revertible action. A deleted collection cannot be restored.

delete_collection_item

< >

( collection_slug: str item_object_id: str missing_ok: bool = False token: Optional[str] = None )

Parameters

  • collection_slug (str) — Slug of the collection to update. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
  • item_object_id (str) — ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id). It must be retrieved from a CollectionItem object. Example: collection.items[0].item_object_id.
  • missing_ok (bool, optional) — If True, do not raise an error if item doesn’t exist.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Delete an item from a collection.

Example:

>>> from huggingface_hub import get_collection, delete_collection_item

# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")

# Delete item based on its ID
>>> delete_collection_item(
...     collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
...     item_object_id=collection.items[-1].item_object_id,
... )

delete_file

< >

( path_in_repo: str repo_id: str token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None )

Parameters

  • path_in_repo (str) — Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin"
  • repo_id (str) — The repository from which the file will be deleted, for example: "username/custom_transformers"
  • token (str, optional) — Authentication token, obtained through login(). Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if the file is in a dataset or space, None or "model" if in a model. Default is None.
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
  • commit_message (str, optional) — The summary / title / first line of the generated commit. Defaults to f"Delete {path_in_repo} with huggingface_hub".
  • commit_description (str, optional) — The description of the generated commit.
  • create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
  • parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.

Deletes a file in the given repo.

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
  • RevisionNotFoundError If the revision to download from cannot be found.
  • EntryNotFoundError If the file to delete cannot be found.
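A common pattern is to propose the deletion for review instead of committing directly to main. A minimal sketch (repo and path are placeholders):

```python
from huggingface_hub import HfApi

api = HfApi()

def propose_file_removal(repo_id: str, path_in_repo: str):
    """Open a Pull Request that deletes `path_in_repo` from `repo_id`."""
    return api.delete_file(
        path_in_repo=path_in_repo,
        repo_id=repo_id,
        create_pr=True,  # PR against "main" so the deletion can be reviewed
        commit_message=f"Delete {path_in_repo}",
    )
```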

delete_folder

< >

( path_in_repo: str repo_id: str token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None )

Parameters

  • path_in_repo (str) — Relative folder path in the repo, for example: "checkpoints/1fec34a".
  • repo_id (str) — The repository from which the folder will be deleted, for example: "username/custom_transformers"
  • token (str, optional) — Authentication token, obtained with HfApi.login method. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if the folder is in a dataset or space, None or "model" if in a model. Default is None.
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
  • commit_message (str, optional) — The summary / title / first line of the generated commit. Defaults to f"Delete folder {path_in_repo} with huggingface_hub".
  • commit_description (str, optional) — The description of the generated commit.
  • create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
  • parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.

Deletes a folder in the given repo.

Simple wrapper around create_commit() method.
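Because delete_folder() is a thin wrapper around create_commit(), the same deletion can be expressed as an explicit commit operation, which is useful when combining it with other changes in a single commit. A sketch:

```python
from huggingface_hub import CommitOperationDelete, HfApi

api = HfApi()

def delete_folder_via_commit(repo_id: str, folder: str):
    """Equivalent of delete_folder(): one commit containing one delete operation."""
    return api.create_commit(
        repo_id=repo_id,
        operations=[CommitOperationDelete(path_in_repo=folder, is_folder=True)],
        commit_message=f"Delete folder {folder}",
    )
```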

delete_inference_endpoint

< >

( name: str namespace: Optional[str] = None token: Optional[str] = None )

Parameters

  • name (str) — The name of the Inference Endpoint to delete.
  • namespace (str, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Delete an Inference Endpoint.

This operation is not reversible. If you don’t want to be charged for an Inference Endpoint, it is preferable to pause it with pause_inference_endpoint() or scale it to zero with scale_to_zero_inference_endpoint().

For convenience, you can also delete an Inference Endpoint using InferenceEndpoint.delete().
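Following the note above, a hypothetical helper can default to the recoverable option (pausing) and only delete when explicitly asked:

```python
from huggingface_hub import HfApi

api = HfApi()

def stop_billing(name: str, destroy: bool = False) -> None:
    """Pause an Inference Endpoint (recoverable) or delete it (irreversible)."""
    if destroy:
        api.delete_inference_endpoint(name)
    else:
        api.pause_inference_endpoint(name)
```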

delete_repo

< >

( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None missing_ok: bool = False )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model.
  • missing_ok (bool, optional, defaults to False) — If True, do not raise an error if repo does not exist.

Raises

    • RepositoryNotFoundError If the repository to delete from cannot be found and missing_ok is set to False (default).

Delete a repo from the HuggingFace Hub. CAUTION: this is irreversible.
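For cleanup scripts (e.g. removing temporary test repos), missing_ok=True makes the call idempotent. A sketch with placeholder names:

```python
from huggingface_hub import HfApi

api = HfApi()

def delete_repo_quietly(repo_id: str, repo_type: str = "model") -> None:
    """Irreversibly delete a repo; do nothing if it doesn't exist."""
    api.delete_repo(repo_id, repo_type=repo_type, missing_ok=True)
```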

delete_space_secret

< >

( repo_id: str key: str token: Optional[str] = None )

Parameters

  • repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
  • key (str) — Secret key. Example: "GITHUB_API_KEY".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Deletes a secret from a Space.

Secrets allow you to set secret keys or tokens in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
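Paired with add_space_secret(), this enables a simple rotation helper (a sketch; the helper name is hypothetical):

```python
from typing import Optional

from huggingface_hub import HfApi

api = HfApi()

def rotate_secret(repo_id: str, key: str, new_value: Optional[str]) -> None:
    """Replace a Space secret, or remove it when no new value is given."""
    if new_value is None:
        api.delete_space_secret(repo_id, key=key)
    else:
        api.add_space_secret(repo_id, key=key, value=new_value)
```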

delete_space_storage

< >

( repo_id: str token: Optional[str] = None ) SpaceRuntime

Parameters

  • repo_id (str) — ID of the Space to update. Example: "HuggingFaceH4/open_llm_leaderboard".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Returns

SpaceRuntime

Runtime information about a Space including Space stage and hardware.

Raises

BadRequestError

  • BadRequestError — If the Space has no persistent storage.

Delete persistent storage for a Space.

delete_space_variable

< >

( repo_id: str key: str token: Optional[str] = None )

Parameters

  • repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
  • key (str) — Variable key. Example: "MODEL_REPO_ID".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Deletes a variable from a Space.

Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.

delete_tag

< >

( repo_id: str tag: str token: Optional[str] = None repo_type: Optional[str] = None )

Parameters

  • repo_id (str) — The repository in which a tag will be deleted. Example: "user/my-cool-model".
  • tag (str) — The name of the tag to delete.
  • token (str, optional) — Authentication token. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if tagging a dataset or space, None or "model" if tagging a model. Default is None.

Raises

RepositoryNotFoundError or RevisionNotFoundError

Delete a tag from a repo on the Hub.
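Since tags on the Hub cannot be moved in place, re-pointing one is a delete followed by a create. A sketch, assuming RevisionNotFoundError is raised when the tag is absent (as documented above):

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import RevisionNotFoundError

api = HfApi()

def move_tag(repo_id: str, tag: str, revision: str) -> None:
    """Re-point `tag` to `revision` by deleting and recreating it."""
    try:
        api.delete_tag(repo_id, tag=tag)
    except RevisionNotFoundError:
        pass  # tag didn't exist yet
    api.create_tag(repo_id, tag=tag, revision=revision)
```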

duplicate_space

< >

( from_id: str to_id: Optional[str] = None private: Optional[bool] = None token: Optional[str] = None exist_ok: bool = False hardware: Optional[SpaceHardware] = None storage: Optional[SpaceStorage] = None sleep_time: Optional[int] = None secrets: Optional[List[Dict[str, str]]] = None variables: Optional[List[Dict[str, str]]] = None ) RepoUrl

Parameters

  • from_id (str) — ID of the Space to duplicate. Example: "pharma/CLIP-Interrogator".
  • to_id (str, optional) — ID of the new Space. Example: "dog/CLIP-Interrogator". If not provided, the new Space will have the same name as the original Space, but in your account.
  • private (bool, optional) — Whether the new Space should be private or not. Defaults to the same privacy as the original Space.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
  • exist_ok (bool, optional, defaults to False) — If True, do not raise an error if repo already exists.
  • hardware (SpaceHardware or str, optional) — Choice of Hardware. Example: "t4-medium". See SpaceHardware for a complete list.
  • storage (SpaceStorage or str, optional) — Choice of persistent storage tier. Example: "small". See SpaceStorage for a complete list.
  • sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
  • secrets (List[Dict[str, str]], optional) — A list of secret keys to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
  • variables (List[Dict[str, str]], optional) — A list of public environment variables to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.

Returns

RepoUrl

URL to the newly created repo. Value is a subclass of str containing attributes like endpoint, repo_type and repo_id.

Raises

    • HTTPError if the HuggingFace API returned an error
    • RepositoryNotFoundError If one of from_id or to_id cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

Duplicate a Space.

Programmatically duplicate a Space. The new Space will be created in your account and will be in the same state as the original Space (running or paused). You can duplicate a Space regardless of its current state.

Example:

>>> from huggingface_hub import duplicate_space

# Duplicate a Space to your account
>>> duplicate_space("multimodalart/dreambooth-training")
RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...)

# Can set custom destination id and visibility flag.
>>> duplicate_space("multimodalart/dreambooth-training", to_id="my-dreambooth", private=True)
RepoUrl('https://huggingface.co/spaces/nateraw/my-dreambooth',...)

edit_discussion_comment

< >

( repo_id: str discussion_num: int comment_id: str new_content: str token: Optional[str] = None repo_type: Optional[str] = None ) DiscussionComment

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
  • comment_id (str) — The ID of the comment to edit.
  • new_content (str) — The new content of the comment. Comments support markdown formatting.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)

Returns

DiscussionComment

The edited comment.

Edits a comment on a Discussion / Pull Request.

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
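Comment IDs come from a discussion's details, so editing typically starts from get_discussion_details(). A sketch (it assumes the last event of the discussion is a comment, which may not hold for status changes):

```python
from huggingface_hub import HfApi

api = HfApi()

def amend_last_comment(repo_id: str, discussion_num: int, new_content: str):
    """Edit the most recent comment of a discussion."""
    details = api.get_discussion_details(repo_id, discussion_num)
    comment = details.events[-1]  # assumption: last event is a DiscussionComment
    return api.edit_discussion_comment(
        repo_id,
        discussion_num=discussion_num,
        comment_id=comment.id,
        new_content=new_content,
    )
```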

file_exists

< >

( repo_id: str filename: str repo_type: Optional[str] = None revision: Optional[str] = None token: Union[str, bool, None] = None )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • filename (str) — The name of the file to check, for example: "config.json"
  • repo_type (str, optional) — Set to "dataset" or "space" if getting repository info from a dataset or a space, None or "model" if getting repository info from a model. Default is None.
  • revision (str, optional) — The revision of the repository from which to get the information. Defaults to "main" branch.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Checks if a file exists in a repository on the Hugging Face Hub.

Examples:

>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False
>>> file_exists("bigcode/not-a-repo", "config.json")
False

get_collection

< >

( collection_slug: str token: Optional[str] = None )

Parameters

  • collection_slug (str) — Slug of the collection of the Hub. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Gets information about a Collection on the Hub.

Returns: Collection

Example:

>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem(
    item_object_id='651446103cd773a050bf64c2',
    item_id='TheBloke/U-Amethyst-20B-AWQ',
    item_type='model',
    position=88,
    note=None
)

get_dataset_tags

< >

( )

List all valid dataset tags as a nested namespace object.
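The returned namespace can be explored attribute by attribute; for instance (attribute names such as `license` are an assumption based on the Hub's tag groups):

```python
from huggingface_hub import HfApi

api = HfApi()

def license_tags():
    """Fetch the valid dataset license tags (network call)."""
    tags = api.get_dataset_tags()
    return tags.license  # other groups: e.g. tags.language, tags.task_categories
```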

get_discussion_details

< >

( repo_id: str discussion_num: int repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)

Fetches a Discussion’s / Pull Request’s details from the Hub.

Returns: DiscussionWithDetails

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

get_full_repo_name

< >

( model_id: str organization: Optional[str] = None token: Optional[Union[bool, str]] = None ) str

Parameters

  • model_id (str) — The name of the model.
  • organization (str, optional) — If passed, the repository name will be in the organization namespace instead of the user namespace.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

str

The repository name in the user’s namespace ({username}/{model_id}) if no organization is passed, and under the organization namespace ({organization}/{model_id}) otherwise.

Returns the repository name for a given model ID and optional organization.
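When an organization is passed, the result is a pure string join; only the no-organization case needs your token to resolve your username. A sketch:

```python
from typing import Optional

from huggingface_hub import HfApi

api = HfApi()

def qualified_name(model_id: str, organization: Optional[str] = None) -> str:
    """'{organization}/{model_id}' if given, else '{username}/{model_id}' (needs a token)."""
    return api.get_full_repo_name(model_id, organization=organization)
```

For example, `qualified_name("my-model", organization="my-org")` returns `"my-org/my-model"` without any network call.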

get_hf_file_metadata

< >

( url: str token: Union[bool, str, None] = None proxies: Optional[Dict] = None timeout: Optional[float] = 10 )

Parameters

  • url (str) — File url, for example returned by hf_hub_url().
  • token (str or bool, optional) — A token to be used for the download.
    • If True, the token is read from the HuggingFace config folder.
    • If False or None, no token is provided.
    • If a string, it’s used as the authentication token.
  • proxies (dict, optional) — Dictionary mapping protocol to the URL of the proxy passed to requests.request.
  • timeout (float, optional, defaults to 10) — How many seconds to wait for the server to send metadata before giving up.

Fetch metadata of a file versioned on the Hub for a given url.
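Combined with hf_hub_url(), this lets you inspect a file (e.g. its size) without downloading it. A sketch with placeholder arguments:

```python
from huggingface_hub import HfApi, hf_hub_url

api = HfApi()

def remote_file_size(repo_id: str, filename: str) -> int:
    """Size in bytes of a file on the Hub, fetched from its metadata only."""
    url = hf_hub_url(repo_id=repo_id, filename=filename)
    meta = api.get_hf_file_metadata(url=url)
    return meta.size
```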

get_inference_endpoint

< >

( name: str namespace: Optional[str] = None token: Optional[str] = None ) InferenceEndpoint

Parameters

  • name (str) — The name of the Inference Endpoint to retrieve information about.
  • namespace (str, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Returns

InferenceEndpoint

Information about the requested Inference Endpoint.

Get information about an Inference Endpoint.

Example:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)

# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'

# Run inference
>>> endpoint.client.text_to_image(...)

get_model_tags

< >

( )

List all valid model tags as a nested namespace object.

get_paths_info

< >

( repo_id: str paths: Union[List[str], str] expand: bool = False revision: Optional[str] = None repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None ) List[Union[RepoFile, RepoFolder]]

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • paths (Union[List[str], str]) — The paths to get information about. If a path does not exist, it is ignored without raising an exception.
  • expand (bool, optional, defaults to False) — Whether to fetch more information about the paths (e.g. last commit and files’ security scan results). This operation is more expensive for the server so only 50 results are returned per page (instead of 1000). As pagination is implemented in huggingface_hub, this is transparent for you except for the time it takes to get the results.
  • revision (str, optional) — The revision of the repository from which to get the information. Defaults to "main" branch.
  • repo_type (str, optional) — The type of the repository from which to get the information ("model", "dataset" or "space"). Defaults to "model".
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

List[Union[RepoFile, RepoFolder]]

The information about the paths, as a list of RepoFile and RepoFolder objects.

Raises

RepositoryNotFoundError or RevisionNotFoundError

  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
  • RevisionNotFoundError — If revision is not found (error 404) on the repo.

Get information about a repo’s paths.

Example:

>>> from huggingface_hub import get_paths_info
>>> paths_info = get_paths_info("allenai/c4", ["README.md", "en"], repo_type="dataset")
>>> paths_info
[
    RepoFile(path='README.md', size=2379, blob_id='f84cb4c97182890fc1dbdeaf1a6a468fd27b4fff', lfs=None, last_commit=None, security=None),
    RepoFolder(path='en', tree_id='dc943c4c40f53d02b31ced1defa7e5f438d5862e', last_commit=None)
]

get_repo_discussions

< >

( repo_id: str author: Optional[str] = None discussion_type: Optional[DiscussionTypeFilter] = None discussion_status: Optional[DiscussionStatusFilter] = None repo_type: Optional[str] = None token: Optional[str] = None ) Iterator[Discussion]

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • author (str, optional) — Pass a value to filter by discussion author. None means no filter. Default is None.
  • discussion_type (str, optional) — Set to "pull_request" to fetch only pull requests, "discussion" to fetch only discussions. Set to "all" or None to fetch both. Default is None.
  • discussion_status (str, optional) — Set to "open" (respectively "closed") to fetch only open (respectively closed) discussions. Set to "all" or None to fetch both. Default is None.
  • repo_type (str, optional) — Set to "dataset" or "space" if fetching from a dataset or space, None or "model" if fetching from a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Returns

Iterator[Discussion]

An iterator of Discussion objects.

Fetches Discussions and Pull Requests for the given repo.

Example:

Collecting all discussions of a repo in a list:

>>> from huggingface_hub import get_repo_discussions
>>> discussions_list = list(get_repo_discussions(repo_id="bert-base-uncased"))

Iterating over discussions of a repo:

>>> from huggingface_hub import get_repo_discussions
>>> for discussion in get_repo_discussions(repo_id="bert-base-uncased"):
...     print(discussion.num, discussion.title)

get_safetensors_metadata

< >

( repo_id: str repo_type: Optional[str] = None revision: Optional[str] = None token: Optional[str] = None ) SafetensorsRepoMetadata

Parameters

  • repo_id (str) — A user or an organization name and a repo name separated by a /.
  • repo_type (str, optional) — Set to "dataset" or "space" if the file is in a dataset or space, None or "model" if in a model. Default is None.
  • revision (str, optional) — The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the head of the "main" branch.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

SafetensorsRepoMetadata

Information related to the safetensors repo.

Raises

    • NotASafetensorsRepoError: if the repo is not a safetensors repo i.e. doesn’t have either a model.safetensors or a model.safetensors.index.json file.
    • SafetensorsParsingError: if a safetensors file header couldn’t be parsed correctly.

Parse metadata for a safetensors repo on the Hub.

We first check if the repo has a single safetensors file or a sharded safetensors repo. If it’s a single safetensors file, we parse the metadata from this file. If it’s a sharded safetensors repo, we parse the metadata from the index file and then parse the metadata from each shard.

To parse metadata from a single safetensors file, use parse_safetensors_file_metadata().

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.

Example:

# Parse repo with single weights file
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
    metadata=None,
    sharded=False,
    weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
    files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}

# Parse repo with sharded model
>>> metadata = get_safetensors_metadata("bigscience/bloom")
Parse safetensors files: 100%|██████████████████████████████████████████| 72/72 [00:12<00:00,  5.78it/s]
>>> metadata
SafetensorsRepoMetadata(metadata={'total_size': 352494542848}, sharded=True, weight_map={...}, files_metadata={...})
>>> len(metadata.files_metadata)
72  # All safetensors files have been fetched

# Parse repo with sharded model
>>> get_safetensors_metadata("runwayml/stable-diffusion-v1-5")
NotASafetensorsRepoError: 'runwayml/stable-diffusion-v1-5' is not a safetensors repo. Couldn't find 'model.safetensors.index.json' or 'model.safetensors' files.

get_space_runtime

< >

( repo_id: str token: Optional[str] = None ) SpaceRuntime

Parameters

  • repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Returns

SpaceRuntime

Runtime information about a Space including Space stage and hardware.

Gets runtime information about a Space.
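The runtime's stage field makes health checks straightforward. A sketch (SpaceStage compares equal to its string value, e.g. "RUNNING"):

```python
from huggingface_hub import HfApi

api = HfApi()

def is_running(repo_id: str) -> bool:
    """True if the Space is currently in the RUNNING stage."""
    runtime = api.get_space_runtime(repo_id)
    return runtime.stage == "RUNNING"
```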

get_space_variables

< >

( repo_id: str token: Optional[str] = None )

Parameters

  • repo_id (str) — ID of the repo to query. Example: "bigcode/in-the-stack".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Gets all variables from a Space.

Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.

get_token_permission

< >

( token: Optional[str] = None ) Literal["read", "write", None]

Parameters

  • token (str, optional) — The token to check for validity. Defaults to the one saved locally.

Returns

Literal["read", "write", None]

Permission granted by the token (“read” or “write”). Returns None if no token is passed or if the token is invalid.

Check if a given token is valid and return its permissions.

For more details about tokens, please refer to https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens.
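One use of this check is failing fast before attempting write operations with a read-only or invalid token. A sketch:

```python
from typing import Optional

from huggingface_hub import HfApi

api = HfApi()

def require_write_token(token: Optional[str] = None) -> None:
    """Raise before any upload if the token lacks 'write' permission."""
    if api.get_token_permission(token) != "write":
        raise PermissionError("A token with 'write' permission is required.")
```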

get_user_overview

< >

( username: str ) User

Parameters

  • username (str) — Username of the user to get an overview of.

Returns

User

A User object with the user’s overview.

Raises

HTTPError

  • HTTPError — HTTP 404 If the user does not exist on the Hub.

Get an overview of a user on the Hub.
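Since a missing user surfaces as an HTTP 404, a hypothetical existence check can simply catch the error:

```python
from requests import HTTPError

from huggingface_hub import HfApi

api = HfApi()

def user_exists(username: str) -> bool:
    """Check whether `username` exists on the Hub."""
    try:
        api.get_user_overview(username)
    except HTTPError:
        return False
    return True
```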

grant_access

< >

( repo_id: str user: str repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — The id of the repo to grant access to.
  • user (str) — The username of the user to grant access.
  • repo_type (str, optional) — The type of the repo to grant access to. Must be one of model, dataset or space. Defaults to model.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token).

Raises

HTTPError

  • HTTPError — HTTP 400 if the repo is not gated.
  • HTTPError — HTTP 400 if the user already has access to the repo.
  • HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have write or admin role in the organization the repo belongs to or if you passed a read token.
  • HTTPError — HTTP 404 if the user does not exist on the Hub.

Grant access to a user for a given gated repo.

Granting access does not require the user to send an access request themselves. The user is automatically added to the accepted list, meaning they can download the repo’s files. You can revoke the granted access at any time using cancel_access_request() or reject_access_request().

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
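Since "user already has access" is reported as an HTTP 400, a best-effort grant (helper name is hypothetical) can swallow that case:

```python
from requests import HTTPError

from huggingface_hub import HfApi

api = HfApi()

def grant_access_quietly(repo_id: str, user: str) -> bool:
    """Grant `user` access to a gated repo; False if the grant was rejected
    (e.g. HTTP 400 when the user already has access, or the repo is not gated)."""
    try:
        api.grant_access(repo_id, user=user)
    except HTTPError:
        return False
    return True
```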

hf_hub_download

< >

( repo_id: str filename: str subfolder: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None cache_dir: Union[str, Path, None] = None local_dir: Union[str, Path, None] = None local_dir_use_symlinks: Union[bool, Literal['auto']] = 'auto' force_download: bool = False force_filename: Optional[str] = None proxies: Optional[Dict] = None etag_timeout: float = 10 resume_download: bool = False token: Optional[Union[str, bool]] = None local_files_only: bool = False legacy_cache_layout: bool = False )

Parameters

  • repo_id (str) — A user or an organization name and a repo name separated by a /.
  • filename (str) — The name of the file in the repo.
  • subfolder (str, optional) — An optional value corresponding to a folder inside the model repo.
  • repo_type (str, optional) — Set to "dataset" or "space" if downloading from a dataset or space, None or "model" if downloading from a model. Default is None.
  • revision (str, optional) — An optional Git revision id which can be a branch name, a tag, or a commit hash.
  • cache_dir (str, Path, optional) — Path to the folder where cached files are stored.
  • local_dir (str or Path, optional) — If provided, the downloaded file will be placed under this directory, either as a symlink (default) or a regular file (see description for more details).
  • local_dir_use_symlinks ("auto" or bool, defaults to "auto") — To be used with local_dir. If set to “auto”, the cache directory will be used and the file will be either duplicated or symlinked to the local directory depending on its size. If set to True, a symlink will be created, no matter the file size. If set to False, the file will either be duplicated from the cache (if it already exists) or downloaded from the Hub and not cached. See description for more details.
  • force_download (bool, optional, defaults to False) — Whether the file should be downloaded even if it already exists in the local cache.
  • proxies (dict, optional) — Dictionary mapping protocol to the URL of the proxy passed to requests.request.
  • etag_timeout (float, optional, defaults to 10) — When fetching ETag, how many seconds to wait for the server to send data before giving up which is passed to requests.request.
  • resume_download (bool, optional, defaults to False) — If True, resume a previously interrupted download.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
  • local_files_only (bool, optional, defaults to False) — If True, avoid downloading the file and return the path to the local cached file if it exists.
  • legacy_cache_layout (bool, optional, defaults to False) — If True, uses the legacy file cache layout i.e. just call hf_hub_url() then cached_download. This is deprecated as the new cache layout is more powerful.

Download a given file if it’s not already present in the local cache.

The new cache file layout looks like this:

  • The cache directory contains one subfolder per repo_id (namespaced by repo type)
  • inside each repo folder:
    • refs is a list of the latest known revision => commit_hash pairs
    • blobs contains the actual file blobs (identified by their git-sha or sha256, depending on whether they’re LFS files or not)
    • snapshots contains one subfolder per commit, each “commit” contains the subset of the files that have been resolved at that particular commit. Each filename is a symlink to the blob at that particular commit.

If local_dir is provided, the file structure from the repo will be replicated in this location. You can configure how you want to move those files:

  • If local_dir_use_symlinks="auto" (default), files are downloaded and stored in the cache directory as blob files. Small files (<5MB) are duplicated in local_dir while a symlink is created for bigger files. The goal is to be able to manually edit and save small files without corrupting the cache while saving disk space for binary files. The 5MB threshold can be configured with the HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD environment variable.
  • If local_dir_use_symlinks=True, files are downloaded, stored in the cache directory and symlinked in local_dir. This is optimal in terms of disk usage but files must not be manually edited.
  • If local_dir_use_symlinks=False and the blob files exist in the cache directory, they are duplicated in the local dir. This means disk usage is not optimized.
  • Finally, if local_dir_use_symlinks=False and the blob files do not exist in the cache directory, then the files are downloaded and directly placed under local_dir. This means if you need to download them again later, they will be re-downloaded entirely.
[  96]  .
└── [ 160]  models--julien-c--EsperBERTo-small
    ├── [ 160]  blobs
    │   ├── [321M]  403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
    │   ├── [ 398]  7cb18dc9bafbfcf74629a4b760af1b160957a83e
    │   └── [1.4K]  d7edf6bd2a681fb0175f7735299831ee1b22b812
    ├── [  96]  refs
    │   └── [  40]  main
    └── [ 128]  snapshots
        ├── [ 128]  2439f60ef33a0d46d85da5001d52aeda5b00ce9f
        │   ├── [  52]  README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
        │   └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
        └── [ 128]  bbc77c8132af1cc5cf678da3f1ddf2de43606d48
            ├── [  52]  README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
            └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
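For example, downloading a single file populates this cache layout and returns the path inside the relevant snapshot folder. A minimal sketch (the repo id in the commented call is illustrative):

```python
from huggingface_hub import hf_hub_download

def fetch_config(repo_id: str, revision=None) -> str:
    """Return the local path of `config.json` for `repo_id`, downloading it
    into the cache on the first call and reusing the cached copy afterwards."""
    return hf_hub_download(repo_id=repo_id, filename="config.json", revision=revision)

# path = fetch_config("gpt2")
# -> .../models--gpt2/snapshots/<commit_hash>/config.json (a symlink into blobs/)
```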


hide_discussion_comment

< >

( repo_id: str discussion_num: int comment_id: str token: Optional[str] = None repo_type: Optional[str] = None ) DiscussionComment

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
  • comment_id (str) — The ID of the comment to hide.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)

Returns

DiscussionComment

the hidden comment

Hides a comment on a Discussion / Pull Request.

Hidden comments' content cannot be retrieved anymore. Hiding a comment is irreversible.

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
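A minimal sketch of hiding a comment (all argument values are placeholders; in practice the comment id comes from a DiscussionComment object, e.g. as returned by get_discussion_details()):

```python
from huggingface_hub import HfApi

def hide_comment(repo_id: str, discussion_num: int, comment_id: str, token: str):
    """Hide a comment on a Discussion or Pull Request.

    Irreversible: the hidden comment's content cannot be retrieved afterwards.
    All argument values are placeholders.
    """
    api = HfApi(token=token)
    return api.hide_discussion_comment(repo_id, discussion_num, comment_id)
```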

like

< >

( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None )

Parameters

  • repo_id (str) — The repository to like. Example: "user/my-cool-model".
  • token (str, optional) — Authentication token. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if liking a dataset or space, None or "model" if liking a model. Default is None.

Raises

RepositoryNotFoundError

  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.

Like a given repo on the Hub (e.g. set as favorite).

See also unlike() and list_liked_repos().

Example:

>>> from huggingface_hub import like, list_liked_repos, unlike
>>> like("gpt2")
>>> "gpt2" in list_liked_repos().models
True
>>> unlike("gpt2")
>>> "gpt2" in list_liked_repos().models
False

list_accepted_access_requests

< >

( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None ) List[AccessRequest]

Parameters

  • repo_id (str) — The id of the repo to get access requests for.
  • repo_type (str, optional) — The type of the repo to get access requests for. Must be one of model, dataset or space. Defaults to model.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token).

Returns

List[AccessRequest]

A list of AccessRequest objects. Each entry contains a username, email, status and timestamp attribute. If the gated repo has a custom form, the fields attribute will be populated with the user’s answers.

Raises

HTTPError

  • HTTPError — HTTP 400 if the repo is not gated.
  • HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have write or admin role in the organization the repo belongs to or if you passed a read token.

Get accepted access requests for a given gated repo.

An accepted request means the user has requested access to the repo and the request has been accepted. The user can download any file of the repo. If the approval mode is automatic, this list should contain all requests by default. Accepted requests can be cancelled or rejected at any time using cancel_access_request() and reject_access_request(). A cancelled request will go back to the pending list while a rejected request will go to the rejected list. In both cases, the user will lose access to the repo.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.

Example:

>>> from huggingface_hub import list_accepted_access_requests

>>> requests = list_accepted_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
[
    AccessRequest(
        username='clem',
        fullname='Clem 🤗',
        email='***',
        timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
        status='accepted',
        fields=None,
    ),
    ...
]

list_collections

< >

( owner: Union[List[str], str, None] = None item: Union[List[str], str, None] = None sort: Optional[Literal['lastModified', 'trending', 'upvotes']] = None limit: Optional[int] = None token: Optional[Union[bool, str]] = None ) Iterable[Collection]

Parameters

  • owner (List[str] or str, optional) — Filter by owner’s username.
  • item (List[str] or str, optional) — Filter collections containing a particular item. Example: "models/teknium/OpenHermes-2.5-Mistral-7B", "datasets/squad" or "papers/2311.12983".
  • sort (Literal["lastModified", "trending", "upvotes"], optional) — Sort collections by last modified, trending or upvotes.
  • limit (int, optional) — Maximum number of collections to be returned.
  • token (bool or str, optional) — An authentication token (see https://huggingface.co/settings/token).

Returns

Iterable[Collection]

an iterable of Collection objects.

List collections on the Huggingface Hub, given some filters.

When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items from a collection, you must use get_collection().
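For example, a small helper listing the most upvoted collections that contain a given item (the example item id in the docstring is illustrative):

```python
from huggingface_hub import HfApi

def top_collections_for_item(item_id: str, limit: int = 5) -> list:
    """Return the slugs of the most upvoted collections containing `item_id`,
    e.g. item_id="models/teknium/OpenHermes-2.5-Mistral-7B"."""
    api = HfApi()
    return [c.slug for c in api.list_collections(item=item_id, sort="upvotes", limit=limit)]
```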

list_datasets

< >

( filter: Union[DatasetFilter, str, Iterable[str], None] = None author: Optional[str] = None benchmark: Optional[Union[str, List[str]]] = None dataset_name: Optional[str] = None language_creators: Optional[Union[str, List[str]]] = None language: Optional[Union[str, List[str]]] = None multilinguality: Optional[Union[str, List[str]]] = None size_categories: Optional[Union[str, List[str]]] = None task_categories: Optional[Union[str, List[str]]] = None task_ids: Optional[Union[str, List[str]]] = None search: Optional[str] = None sort: Optional[Union[Literal['last_modified'], str]] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None full: Optional[bool] = None token: Optional[str] = None ) Iterable[DatasetInfo]

Parameters

  • filter (DatasetFilter or str or Iterable, optional) — A string or DatasetFilter which can be used to identify datasets on the hub.
  • author (str, optional) — A string which identifies the author of the returned datasets.
  • benchmark (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub by their official benchmark.
  • dataset_name (str, optional) — A string that can be used to identify datasets on the Hub by name, such as SQAC or wikineural.
  • language_creators (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub with how the data was curated, such as crowdsourced or machine_generated.
  • language (str or List, optional) — A string or list of strings representing a two-character language to filter datasets by on the Hub.
  • multilinguality (str or List, optional) — A string or list of strings representing a filter for datasets that contain multiple languages.
  • size_categories (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub by the size of the dataset such as 100K<n<1M or 1M<n<10M.
  • task_categories (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub by the designed task, such as audio_classification or named_entity_recognition.
  • task_ids (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub by the specific task such as speech_emotion_recognition or paraphrase.
  • search (str, optional) — A string that will be contained in the returned datasets.
  • sort (Literal["last_modified"] or str, optional) — The key with which to sort the resulting datasets. Possible values are the properties of the huggingface_hub.hf_api.DatasetInfo class.
  • direction (Literal[-1] or int, optional) — Direction in which to sort. The value -1 sorts by descending order while all other values sort by ascending order.
  • limit (int, optional) — The limit on the number of datasets fetched. Leaving this option to None fetches all datasets.
  • full (bool, optional) — Whether to fetch all dataset data, including the last_modified, the card_data and the files. Can contain useful information such as the PapersWithCode ID.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

Iterable[DatasetInfo]

an iterable of huggingface_hub.hf_api.DatasetInfo objects.

List datasets hosted on the Huggingface Hub, given some filters.

Example usage with the filter argument:

>>> from huggingface_hub import HfApi

>>> api = HfApi()

>>> # List all datasets
>>> api.list_datasets()


>>> # List only the text classification datasets
>>> api.list_datasets(filter="task_categories:text-classification")


>>> # List only the datasets in russian for language modeling
>>> api.list_datasets(
...     filter=("language:ru", "task_ids:language-modeling")
... )


Example usage with the search argument:

>>> from huggingface_hub import HfApi

>>> api = HfApi()

>>> # List all datasets with "text" in their name
>>> api.list_datasets(search="text")

>>> # List all datasets with "text" in their name made by google
>>> api.list_datasets(search="text", author="google")

list_inference_endpoints

< >

( namespace: Optional[str] = None token: Optional[str] = None ) List[InferenceEndpoint]

Parameters

  • namespace (str, optional) — The namespace to list endpoints for. Defaults to the current user. Set to "*" to list all endpoints from all namespaces (i.e. personal namespace and all orgs the user belongs to).
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Returns

List[InferenceEndpoint]

A list of all inference endpoints for the given namespace.

Lists all inference endpoints for the given namespace.

Example:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_inference_endpoints()
[InferenceEndpoint(name='my-endpoint', ...), ...]

list_liked_repos

< >

( user: Optional[str] = None token: Optional[str] = None ) UserLikes

Parameters

  • user (str, optional) — Name of the user for which you want to fetch the likes.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token). Used only if user is not passed to implicitly determine the current user name.

Returns

UserLikes

object containing the user name and 3 lists of repo ids (1 for models, 1 for datasets and 1 for Spaces).

Raises

ValueError

  • ValueError — If user is not passed and no token found (either from argument or from machine).

List all public repos liked by a user on huggingface.co.

This list is public so token is optional. If user is not passed, it defaults to the logged in user.

See also like() and unlike().

Example:

>>> from huggingface_hub import list_liked_repos

>>> likes = list_liked_repos("julien-c")

>>> likes.user
"julien-c"

>>> likes.models
["osanseviero/streamlit_1.15", "Xhaheen/ChatGPT_HF", ...]

list_metrics

< >

( ) List[MetricInfo]

Returns

List[MetricInfo]

a list of MetricInfo objects.

Get the public list of all the metrics on huggingface.co
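For example, a sketch collecting just the metric ids (this assumes each MetricInfo exposes an id attribute, as its class documentation describes):

```python
from huggingface_hub import HfApi

def metric_ids() -> list:
    """Return the id of every public metric on huggingface.co."""
    return [metric.id for metric in HfApi().list_metrics()]

# ids = metric_ids()  # e.g. ["accuracy", "bleu", ...]
```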

list_models

< >

( filter: Union[ModelFilter, str, Iterable[str], None] = None author: Optional[str] = None library: Optional[Union[str, List[str]]] = None language: Optional[Union[str, List[str]]] = None model_name: Optional[str] = None task: Optional[Union[str, List[str]]] = None trained_dataset: Optional[Union[str, List[str]]] = None tags: Optional[Union[str, List[str]]] = None search: Optional[str] = None emissions_thresholds: Optional[Tuple[float, float]] = None sort: Union[Literal['last_modified'], str, None] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None full: Optional[bool] = None cardData: bool = False fetch_config: bool = False token: Optional[Union[bool, str]] = None pipeline_tag: Optional[str] = None ) Iterable[ModelInfo]

Parameters

  • filter (ModelFilter or str or Iterable, optional) — A string or ModelFilter which can be used to identify models on the Hub.
  • author (str, optional) — A string which identifies the author (user or organization) of the returned models.
  • library (str or List, optional) — A string or list of strings of libraries the models were originally trained with, such as pytorch, tensorflow, or allennlp.
  • language (str or List, optional) — A string or list of strings of languages, both by name and country code, such as “en” or “English”
  • model_name (str, optional) — A string that contains complete or partial names of models on the Hub, such as “bert” or “bert-base-cased”.
  • task (str or List, optional) — A string or list of strings of tasks models were designed for, such as: “fill-mask” or “automatic-speech-recognition”
  • trained_dataset (str or List, optional) — A string tag or a list of string tags of the trained dataset for a model on the Hub.
  • tags (str or List, optional) — A string tag or a list of tags to filter models on the Hub by, such as text-generation or spacy.
  • search (str, optional) — A string that will be contained in the returned model ids.
  • emissions_thresholds (Tuple, optional) — A tuple of two ints or floats representing the minimum and maximum carbon footprint, in grams, used to filter the resulting models.
  • sort (Literal["last_modified"] or str, optional) — The key with which to sort the resulting models. Possible values are the properties of the huggingface_hub.hf_api.ModelInfo class.
  • direction (Literal[-1] or int, optional) — Direction in which to sort. The value -1 sorts by descending order while all other values sort by ascending order.
  • limit (int, optional) — The limit on the number of models fetched. Leaving this option to None fetches all models.
  • full (bool, optional) — Whether to fetch all model data, including the last_modified, the sha, the files and the tags. This is set to True by default when using a filter.
  • cardData (bool, optional) — Whether to grab the metadata for the model as well. Can contain useful information such as carbon emissions, metrics, and datasets trained on.
  • fetch_config (bool, optional) — Whether to fetch the model configs as well. This is not included in full due to its size.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
  • pipeline_tag (str, optional) — A string pipeline tag to filter models on the Hub by, such as summarization

Returns

Iterable[ModelInfo]

an iterable of huggingface_hub.hf_api.ModelInfo objects.

List models hosted on the Huggingface Hub, given some filters.

Example usage with the filter argument:

>>> from huggingface_hub import HfApi

>>> api = HfApi()

>>> # List all models
>>> api.list_models()

>>> # List only the text classification models
>>> api.list_models(filter="text-classification")

>>> # List only models from the AllenNLP library
>>> api.list_models(filter="allennlp")

Example usage with the search argument:

>>> from huggingface_hub import HfApi

>>> api = HfApi()

>>> # List all models with "bert" in their name
>>> api.list_models(search="bert")

>>> # List all models with "bert" in their name made by google
>>> api.list_models(search="bert", author="google")

list_organization_members

< >

( organization: str ) Iterable[User]

Parameters

  • organization (str) — Name of the organization to get the members of.

Returns

Iterable[User]

A list of User objects with the members of the organization.

Raises

HTTPError

  • HTTPError — HTTP 404 if the organization does not exist on the Hub.

List of members of an organization on the Hub.
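For example, a sketch that flattens the result into usernames (the org name in the commented call is illustrative):

```python
from huggingface_hub import HfApi

def member_usernames(organization: str) -> list:
    """Return the usernames of all members of `organization`.
    Raises HTTPError (404) if the organization does not exist on the Hub."""
    return [user.username for user in HfApi().list_organization_members(organization)]

# member_usernames("huggingface")  # "huggingface" is an illustrative org name
```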

list_pending_access_requests

< >

( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None ) List[AccessRequest]

Parameters

  • repo_id (str) — The id of the repo to get access requests for.
  • repo_type (str, optional) — The type of the repo to get access requests for. Must be one of model, dataset or space. Defaults to model.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token).

Returns

List[AccessRequest]

A list of AccessRequest objects. Each entry contains a username, email, status and timestamp attribute. If the gated repo has a custom form, the fields attribute will be populated with the user’s answers.

Raises

HTTPError

  • HTTPError — HTTP 400 if the repo is not gated.
  • HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have write or admin role in the organization the repo belongs to or if you passed a read token.

Get pending access requests for a given gated repo.

A pending request means the user has requested access to the repo but the request has not been processed yet. If the approval mode is automatic, this list should be empty. Pending requests can be accepted or rejected using accept_access_request() and reject_access_request().

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.

Example:

>>> from huggingface_hub import list_pending_access_requests, accept_access_request

# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
[
    AccessRequest(
        username='clem',
        fullname='Clem 🤗',
        email='***',
        timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
        status='pending',
        fields=None,
    ),
    ...
]

# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")

list_rejected_access_requests

< >

( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None ) List[AccessRequest]

Parameters

  • repo_id (str) — The id of the repo to get access requests for.
  • repo_type (str, optional) — The type of the repo to get access requests for. Must be one of model, dataset or space. Defaults to model.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token).

Returns

List[AccessRequest]

A list of AccessRequest objects. Each entry contains a username, email, status and timestamp attribute. If the gated repo has a custom form, the fields attribute will be populated with the user’s answers.

Raises

HTTPError

  • HTTPError — HTTP 400 if the repo is not gated.
  • HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have write or admin role in the organization the repo belongs to or if you passed a read token.

Get rejected access requests for a given gated repo.

A rejected request means the user has requested access to the repo and the request has been explicitly rejected by a repo owner (either you or another user from your organization). The user cannot download any file of the repo. Rejected requests can be accepted or cancelled at any time using accept_access_request() and cancel_access_request(). A cancelled request will go back to the pending list while an accepted request will go to the accepted list.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.

Example:

>>> from huggingface_hub import list_rejected_access_requests

>>> requests = list_rejected_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
[
    AccessRequest(
        username='clem',
        fullname='Clem 🤗',
        email='***',
        timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
        status='rejected',
        fields=None,
    ),
    ...
]

list_repo_commits

< >

( repo_id: str repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None revision: Optional[str] = None formatted: bool = False ) List[GitCommitInfo]

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • repo_type (str, optional) — Set to "dataset" or "space" if listing commits from a dataset or a Space, None or "model" if listing from a model. Default is None.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
  • formatted (bool) — Whether to return the HTML-formatted title and description of the commits. Defaults to False.

Returns

List[GitCommitInfo]

list of objects containing information about the commits for a repo on the Hub.

Raises

RepositoryNotFoundError or RevisionNotFoundError

  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
  • RevisionNotFoundError — If revision is not found (error 404) on the repo.

Get the list of commits of a given revision for a repo on the Hub.

Commits are sorted by date (last commit first).

Example:

>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Commits are sorted by date (last commit first)
>>> initial_commit = api.list_repo_commits("gpt2")[-1]

# Initial commit is always a system commit containing the `.gitattributes` file.
>>> initial_commit
GitCommitInfo(
    commit_id='9b865efde13a30c13e0a33e536cf3e4a5a9d71d8',
    authors=['system'],
    created_at=datetime.datetime(2019, 2, 18, 10, 36, 15, tzinfo=datetime.timezone.utc),
    title='initial commit',
    message='',
    formatted_title=None,
    formatted_message=None
)

# Create an empty branch by deriving from initial commit
>>> api.create_branch("gpt2", "new_empty_branch", revision=initial_commit.commit_id)

list_repo_files

< >

( repo_id: str revision: Optional[str] = None repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None ) List[str]

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • revision (str, optional) — The revision of the model repository from which to get the information.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

List[str]

the list of files in a given repository.

Get the list of files in a given repo.
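Since the result is a plain list of relative paths, it is easy to filter. A sketch (the helper and its default extension are illustrative, not part of the API):

```python
from huggingface_hub import HfApi

def weight_files(repo_id: str, extension: str = ".safetensors") -> list:
    """Return only the files of `repo_id` whose path ends with `extension`.

    Illustrative helper: list_repo_files returns plain relative paths,
    so standard string filtering is all that is needed.
    """
    return [f for f in HfApi().list_repo_files(repo_id) if f.endswith(extension)]
```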

list_repo_likers

< >

( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None ) List[User]

Parameters

  • repo_id (str) — The repository to list likers for. Example: "user/my-cool-model".
  • token (str, optional) — Authentication token. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.

Returns

List[User]

a list of User objects.

List all users who liked a given repo on the Hugging Face Hub.

See also like() and list_liked_repos().
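For example, a sketch flattening the result into usernames (this assumes only the documented User.username attribute):

```python
from huggingface_hub import HfApi

def liker_usernames(repo_id: str) -> list:
    """Return the username of every user who liked `repo_id`."""
    return [user.username for user in HfApi().list_repo_likers(repo_id)]
```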

list_repo_refs

< >

( repo_id: str repo_type: Optional[str] = None include_pull_requests: bool = False token: Optional[Union[bool, str]] = None ) GitRefs

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • repo_type (str, optional) — Set to "dataset" or "space" if listing refs from a dataset or a Space, None or "model" if listing from a model. Default is None.
  • include_pull_requests (bool, optional) — Whether to include refs from pull requests in the list. Defaults to False.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

GitRefs

object containing all information about branches and tags for a repo on the Hub.

Get the list of refs of a given repo (both tags and branches).

Example:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_repo_refs("gpt2")
GitRefs(branches=[GitRefInfo(name='main', ref='refs/heads/main', target_commit='e7da7f221d5bf496a48136c0cd264e630fe9fcc8')], converts=[], tags=[])

>>> api.list_repo_refs("bigcode/the-stack", repo_type='dataset')
GitRefs(
    branches=[
        GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'),
        GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714')
    ],
    converts=[],
    tags=[
        GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da')
    ]
)

list_repo_tree

< >

( repo_id: str path_in_repo: Optional[str] = None recursive: bool = False expand: bool = False revision: Optional[str] = None repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None ) Iterable[Union[RepoFile, RepoFolder]]

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • path_in_repo (str, optional) — Relative path of the tree (folder) in the repo, for example: "checkpoints/1fec34a/results". Will default to the root tree (folder) of the repository.
  • recursive (bool, optional, defaults to False) — Whether to list tree’s files and folders recursively.
  • expand (bool, optional, defaults to False) — Whether to fetch more information about the tree’s files and folders (e.g. last commit and files’ security scan results). This operation is more expensive for the server so only 50 results are returned per page (instead of 1000). As pagination is implemented in huggingface_hub, this is transparent for you except for the time it takes to get the results.
  • revision (str, optional) — The revision of the repository from which to get the tree. Defaults to "main" branch.
  • repo_type (str, optional) — The type of the repository from which to get the tree ("model", "dataset" or "space"). Defaults to "model".
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

Iterable[Union[RepoFile, RepoFolder]]

The information about the tree’s files and folders, as an iterable of RepoFile and RepoFolder objects. The order of the files and folders is not guaranteed.

Raises

RepositoryNotFoundError or RevisionNotFoundError or EntryNotFoundError

List a repo tree’s files and folders and get information about them.

Examples:

Get information about a repo’s tree.

>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("lysandre/arxiv-nlp")
>>> repo_tree
<generator object HfApi.list_repo_tree at 0x7fa4088e1ac0>
>>> list(repo_tree)
[
    RepoFile(path='.gitattributes', size=391, blob_id='ae8c63daedbd4206d7d40126955d4e6ab1c80f8f', lfs=None, last_commit=None, security=None),
    RepoFile(path='README.md', size=391, blob_id='43bd404b159de6fba7c2f4d3264347668d43af25', lfs=None, last_commit=None, security=None),
    RepoFile(path='config.json', size=554, blob_id='2f9618c3a19b9a61add74f70bfb121335aeef666', lfs=None, last_commit=None, security=None),
    RepoFile(
        path='flax_model.msgpack', size=497764107, blob_id='8095a62ccb4d806da7666fcda07467e2d150218e',
        lfs={'size': 497764107, 'sha256': 'd88b0d6a6ff9c3f8151f9d3228f57092aaea997f09af009eefd7373a77b5abb9', 'pointer_size': 134}, last_commit=None, security=None
    ),
    RepoFile(path='merges.txt', size=456318, blob_id='226b0752cac7789c48f0cb3ec53eda48b7be36cc', lfs=None, last_commit=None, security=None),
    RepoFile(
        path='pytorch_model.bin', size=548123560, blob_id='64eaa9c526867e404b68f2c5d66fd78e27026523',
        lfs={'size': 548123560, 'sha256': '9be78edb5b928eba33aa88f431551348f7466ba9f5ef3daf1d552398722a5436', 'pointer_size': 134}, last_commit=None, security=None
    ),
    RepoFile(path='vocab.json', size=898669, blob_id='b00361fece0387ca34b4b8b8539ed830d644dbeb', lfs=None, last_commit=None, security=None)
]

Get even more information about a repo’s tree (last commit and files’ security scan results)

>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("prompthero/openjourney-v4", expand=True)
>>> list(repo_tree)
[
    RepoFolder(
        path='feature_extractor',
        tree_id='aa536c4ea18073388b5b0bc791057a7296a00398',
        last_commit={
            'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
            'title': 'Upload diffusers weights (#48)',
            'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
        }
    ),
    RepoFolder(
        path='safety_checker',
        tree_id='65aef9d787e5557373fdf714d6c34d4fcdd70440',
        last_commit={
            'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
            'title': 'Upload diffusers weights (#48)',
            'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
        }
    ),
    RepoFile(
        path='model_index.json',
        size=582,
        blob_id='d3d7c1e8c3e78eeb1640b8e2041ee256e24c9ee1',
        lfs=None,
        last_commit={
            'oid': 'b195ed2d503f3eb29637050a886d77bd81d35f0e',
            'title': 'Fix deprecation warning by changing `CLIPFeatureExtractor` to `CLIPImageProcessor`. (#54)',
            'date': datetime.datetime(2023, 5, 15, 21, 41, 59, tzinfo=datetime.timezone.utc)
        },
        security={
            'safe': True,
            'av_scan': {'virusFound': False, 'virusNames': None},
            'pickle_import_scan': None
        }
    )
    ...
]

list_spaces

< >

( filter: Union[str, Iterable[str], None] = None author: Optional[str] = None search: Optional[str] = None sort: Union[Literal['last_modified'], str, None] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None datasets: Union[str, Iterable[str], None] = None models: Union[str, Iterable[str], None] = None linked: bool = False full: Optional[bool] = None token: Optional[str] = None ) Iterable[SpaceInfo]

Parameters

  • filter (str or Iterable, optional) — A string tag or list of tags that can be used to identify Spaces on the Hub.
  • author (str, optional) — A string which identifies the author of the returned Spaces.
  • search (str, optional) — A string that will be contained in the returned Spaces.
  • sort (Literal["last_modified"] or str, optional) — The key with which to sort the resulting Spaces. Possible values are the properties of the huggingface_hub.hf_api.SpaceInfo class.
  • direction (Literal[-1] or int, optional) — Direction in which to sort. The value -1 sorts by descending order while all other values sort by ascending order.
  • limit (int, optional) — The limit on the number of Spaces fetched. Leaving this option to None fetches all Spaces.
  • datasets (str or Iterable, optional) — Whether to return Spaces that make use of a dataset. The name of a specific dataset can be passed as a string.
  • models (str or Iterable, optional) — Whether to return Spaces that make use of a model. The name of a specific model can be passed as a string.
  • linked (bool, optional) — Whether to return Spaces that make use of either a model or a dataset.
  • full (bool, optional) — Whether to fetch all Spaces data, including the last_modified, siblings and card_data fields.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

Iterable[SpaceInfo]

an iterable of huggingface_hub.hf_api.SpaceInfo objects.

List spaces hosted on the Hugging Face Hub, given some filters.

list_user_followers

< >

( username: str ) Iterable[User]

Parameters

  • username (str) — Username of the user to get the followers of.

Returns

Iterable[User]

A list of User objects with the followers of the user.

Raises

HTTPError

  • HTTPError — HTTP 404 If the user does not exist on the Hub.

Get the list of followers of a user on the Hub.

list_user_following

< >

( username: str ) Iterable[User]

Parameters

  • username (str) — Username of the user whose followed users should be listed.

Returns

Iterable[User]

A list of User objects with the users followed by the user.

Raises

HTTPError

  • HTTPError — HTTP 404 If the user does not exist on the Hub.

Get the list of users followed by a user on the Hub.

merge_pull_request

< >

( repo_id: str discussion_num: int token: Optional[str] = None comment: Optional[str] = None repo_type: Optional[str] = None ) DiscussionStatusChange

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
  • comment (str, optional) — An optional comment to post with the status change.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)

Returns

DiscussionStatusChange

the status change event

Merges a Pull Request.

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

model_info

< >

( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None securityStatus: Optional[bool] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) huggingface_hub.hf_api.ModelInfo

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • revision (str, optional) — The revision of the model repository from which to get the information.
  • timeout (float, optional) — The timeout in seconds for the request to the Hub.
  • securityStatus (bool, optional) — Whether to retrieve the security status from the model repository as well.
  • files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults to False.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

huggingface_hub.hf_api.ModelInfo

The model repository information.

Get info on one specific model on huggingface.co.

Model can be private if you pass an acceptable token or are logged in.

Raises the following errors:

  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
  • RevisionNotFoundError If the revision to download from cannot be found.

move_repo

< >

( from_id: str to_id: str repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • from_id (str) — A namespace (user or an organization) and a repo name separated by a /. Original repository identifier.
  • to_id (str) — A namespace (user or an organization) and a repo name separated by a /. Final repository identifier.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)

Moving a repository from namespace1/repo_name1 to namespace2/repo_name2

Note there are certain limitations. For more information about moving repositories, please see https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo.

Raises the following errors:

  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

parse_safetensors_file_metadata

< >

( repo_id: str filename: str repo_type: Optional[str] = None revision: Optional[str] = None token: Optional[str] = None ) SafetensorsFileMetadata

Parameters

  • repo_id (str) — A user or an organization name and a repo name separated by a /.
  • filename (str) — The name of the file in the repo.
  • repo_type (str, optional) — Set to "dataset" or "space" if the file is in a dataset or space, None or "model" if in a model. Default is None.
  • revision (str, optional) — The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the head of the "main" branch.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

SafetensorsFileMetadata

information related to a safetensors file.

Raises

    • NotASafetensorsRepoError: if the repo is not a safetensors repo i.e. doesn’t have either a model.safetensors or a model.safetensors.index.json file.
    • SafetensorsParsingError: if a safetensors file header couldn’t be parsed correctly.

Parse metadata from a safetensors file on the Hub.

To parse metadata from all safetensors files in a repo at once, use get_safetensors_metadata().

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
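The header that this method parses follows a simple layout: a safetensors file starts with an 8-byte little-endian unsigned integer giving the length of a JSON header, followed by the header itself. The snippet below is an illustrative parser over an in-memory blob, based on the published safetensors format; it is not the Hub's server-side implementation.

```python
import json
import struct

def parse_header(raw: bytes) -> dict:
    # First 8 bytes: little-endian u64 with the JSON header length.
    (header_len,) = struct.unpack("<Q", raw[:8])
    # Next header_len bytes: a JSON map of tensor name -> dtype/shape/offsets.
    return json.loads(raw[8 : 8 + header_len])

# Build a tiny in-memory "file" with a single bf16 tensor entry.
header = {"weight": {"dtype": "BF16", "shape": [2, 2], "data_offsets": [0, 8]}}
payload = json.dumps(header).encode()
blob = struct.pack("<Q", len(payload)) + payload + b"\x00" * 8

assert parse_header(blob)["weight"]["shape"] == [2, 2]
```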

pause_inference_endpoint

< >

( name: str namespace: Optional[str] = None token: Optional[str] = None ) InferenceEndpoint

Parameters

  • name (str) — The name of the Inference Endpoint to pause.
  • namespace (str, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Returns

InferenceEndpoint

information about the paused Inference Endpoint.

Pause an Inference Endpoint.

A paused Inference Endpoint will not be charged. It can be resumed at any time using resume_inference_endpoint(). This is different from scaling the Inference Endpoint to zero with scale_to_zero_inference_endpoint(), which automatically restarts when a request is made to it.

For convenience, you can also pause an Inference Endpoint using InferenceEndpoint.pause().

pause_space

< >

( repo_id: str token: Optional[str] = None ) SpaceRuntime

Parameters

  • repo_id (str) — ID of the Space to pause. Example: "Salesforce/BLIP2".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Returns

SpaceRuntime

Runtime information about your Space including stage=PAUSED and requested hardware.

Raises

RepositoryNotFoundError or HfHubHTTPError or BadRequestError

  • RepositoryNotFoundError — If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you are not authenticated.
  • HfHubHTTPError — 403 Forbidden: only the owner of a Space can pause it. If you want to manage a Space that you don’t own, either ask the owner by opening a Discussion or duplicate the Space.
  • BadRequestError — If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide a static Space, you can set it to private.

Pause your Space.

A paused Space stops executing until manually restarted by its owner. This is different from the sleeping state in which free Spaces go after 48h of inactivity. Paused time is not billed to your account, no matter the hardware you’ve selected. To restart your Space, use restart_space() or go to your Space settings page.

For more details, please visit the docs.

preupload_lfs_files

< >

( repo_id: str additions: Iterable[CommitOperationAdd] token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None create_pr: Optional[bool] = None num_threads: int = 5 free_memory: bool = True gitignore_content: Optional[str] = None )

Parameters

  • repo_id (str) — The repository in which you will commit the files, for example: "username/custom_transformers".
  • additions (Iterable of CommitOperationAdd) — The list of files to upload. Warning: the objects in this list will be mutated to include information relative to the upload. Do not reuse the same objects for multiple commits.
  • token (str, optional) — Authentication token. Will default to the stored token.
  • repo_type (str, optional) — The type of repository to upload to (e.g. "model" -default-, "dataset" or "space").
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
  • create_pr (boolean, optional) — Whether or not you plan to create a Pull Request with that commit. Defaults to False.
  • num_threads (int, optional) — Number of concurrent threads for uploading files. Defaults to 5. Setting it to 2 means at most 2 files will be uploaded concurrently.
  • gitignore_content (str, optional) — The content of the .gitignore file to know which files should be ignored. The order of priority is to first check if gitignore_content is passed, then check if the .gitignore file is present in the list of files to commit and finally default to the .gitignore file already hosted on the Hub (if any).

Pre-upload LFS files to S3 in preparation for a future commit.

This method is useful if you are generating the files to upload on-the-fly and you don’t want to store them in memory before uploading them all at once.

This is a power-user method. You shouldn’t need to call it directly to make a normal commit. Use create_commit() directly instead.

Commit operations will be mutated during the process. In particular, the attached path_or_fileobj will be removed after the upload to save memory (and replaced by an empty bytes object). Do not reuse the same objects except to pass them to create_commit(). If you don’t want to remove the attached content from the commit operation object, pass free_memory=False.

Example:

>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo

>>> repo_id = create_repo("test_preupload").repo_id

# Generate and preupload LFS files one by one
>>> operations = [] # List of all `CommitOperationAdd` objects that will be generated
>>> for i in range(5):
...     content = ... # generate binary content
...     addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
...     preupload_lfs_files(repo_id, additions=[addition]) # upload + free memory
...     operations.append(addition)

# Create commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
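The gitignore_content resolution order described in the parameters above can be sketched as follows (an illustrative helper with hypothetical names, not the library's implementation):

```python
from typing import Dict, Optional

def resolve_gitignore(
    gitignore_content: Optional[str],
    files_to_commit: Dict[str, str],
    hub_gitignore: Optional[str],
) -> Optional[str]:
    # 1. An explicitly passed gitignore_content wins.
    if gitignore_content is not None:
        return gitignore_content
    # 2. Otherwise, a .gitignore among the files being committed is used.
    if ".gitignore" in files_to_commit:
        return files_to_commit[".gitignore"]
    # 3. Finally, fall back to the .gitignore already hosted on the Hub (if any).
    return hub_gitignore

assert resolve_gitignore("*.log", {".gitignore": "*.tmp"}, "*.bak") == "*.log"
assert resolve_gitignore(None, {".gitignore": "*.tmp"}, "*.bak") == "*.tmp"
assert resolve_gitignore(None, {}, "*.bak") == "*.bak"
```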

reject_access_request

< >

( repo_id: str user: str repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — The id of the repo to reject access request for.
  • user (str) — The username of the user which access request should be rejected.
  • repo_type (str, optional) — The type of the repo to reject access request for. Must be one of model, dataset or space. Defaults to model.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token).

Raises

HTTPError

  • HTTPError — HTTP 400 if the repo is not gated.
  • HTTPError — HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t have write or admin role in the organization the repo belongs to or if you passed a read token.
  • HTTPError — HTTP 404 if the user does not exist on the Hub.
  • HTTPError — HTTP 404 if the user access request cannot be found.
  • HTTPError — HTTP 404 if the user access request is already in the rejected list.

Reject an access request from a user for a given gated repo.

A rejected request will go to the rejected list. The user cannot download any file of the repo. Rejected requests can be accepted or cancelled at any time using accept_access_request() and cancel_access_request(). A cancelled request will go back to the pending list while an accepted request will go to the accepted list.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.

rename_discussion

< >

( repo_id: str discussion_num: int new_title: str token: Optional[str] = None repo_type: Optional[str] = None ) DiscussionTitleChange

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
  • new_title (str) — The new title for the discussion
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)

Returns

DiscussionTitleChange

the title change event

Renames a Discussion.

Examples:

>>> new_title = "New title, fixing a typo"
>>> HfApi().rename_discussion(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_title=new_title
... )
# DiscussionTitleChange(id='deadbeef0000000', type='title-change', ...)

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

repo_exists

< >

( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • repo_type (str, optional) — Set to "dataset" or "space" if getting repository info from a dataset or a space, None or "model" if getting repository info from a model. Default is None.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Checks if a repository exists on the Hugging Face Hub.

Examples:

>>> from huggingface_hub import repo_exists
>>> repo_exists("google/gemma-7b")
True
>>> repo_exists("google/not-a-repo")
False

repo_info

< >

( repo_id: str revision: Optional[str] = None repo_type: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) Union[SpaceInfo, DatasetInfo, ModelInfo]

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • revision (str, optional) — The revision of the repository from which to get the information.
  • repo_type (str, optional) — Set to "dataset" or "space" if getting repository info from a dataset or a space, None or "model" if getting repository info from a model. Default is None.
  • timeout (float, optional) — The timeout in seconds for the request to the Hub.
  • files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults to False.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

Union[SpaceInfo, DatasetInfo, ModelInfo]

The repository information, as a huggingface_hub.hf_api.DatasetInfo, huggingface_hub.hf_api.ModelInfo or huggingface_hub.hf_api.SpaceInfo object.

Get the info object for a given repo of a given type.

Raises the following errors:

  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
  • RevisionNotFoundError If the revision to download from cannot be found.

request_space_hardware

< >

( repo_id: str hardware: SpaceHardware token: Optional[str] = None sleep_time: Optional[int] = None ) SpaceRuntime

Parameters

  • repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
  • hardware (str or SpaceHardware) — Hardware on which to run the Space. Example: "t4-medium".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
  • sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.

Returns

SpaceRuntime

Runtime information about a Space including Space stage and hardware.

Request new hardware for a Space.

It is also possible to request hardware directly when creating the Space repo! See create_repo() for details.

request_space_storage

< >

( repo_id: str storage: SpaceStorage token: Optional[str] = None ) SpaceRuntime

Parameters

  • repo_id (str) — ID of the Space to update. Example: "HuggingFaceH4/open_llm_leaderboard".
  • storage (str or SpaceStorage) — Storage tier. Either ‘small’, ‘medium’, or ‘large’.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Returns

SpaceRuntime

Runtime information about a Space including Space stage and hardware.

Request persistent storage for a Space.

It is not possible to decrease persistent storage after it is granted. To do so, you must delete it via delete_space_storage().

restart_space

< >

( repo_id: str token: Optional[str] = None factory_reboot: bool = False ) SpaceRuntime

Parameters

  • repo_id (str) — ID of the Space to restart. Example: "Salesforce/BLIP2".
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
  • factory_reboot (bool, optional) — If True, the Space will be rebuilt from scratch without caching any requirements.

Returns

SpaceRuntime

Runtime information about your Space.

Raises

RepositoryNotFoundError or HfHubHTTPError or BadRequestError

  • RepositoryNotFoundError — If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you are not authenticated.
  • HfHubHTTPError — 403 Forbidden: only the owner of a Space can restart it. If you want to restart a Space that you don’t own, either ask the owner by opening a Discussion or duplicate the Space.
  • BadRequestError — If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide a static Space, you can set it to private.

Restart your Space.

This is the only way to programmatically restart a Space if you’ve put it on Pause (see pause_space()). You must be the owner of the Space to restart it. If you are using upgraded hardware, your account will be billed as soon as the Space is restarted. You can trigger a restart no matter the current state of a Space.

For more details, please visit the docs.

resume_inference_endpoint

< >

( name: str namespace: Optional[str] = None token: Optional[str] = None ) InferenceEndpoint

Parameters

  • name (str) — The name of the Inference Endpoint to resume.
  • namespace (str, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Returns

InferenceEndpoint

information about the resumed Inference Endpoint.

Resume an Inference Endpoint.

For convenience, you can also resume an Inference Endpoint using InferenceEndpoint.resume().

revision_exists

< >

( repo_id: str revision: str repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • revision (str) — The revision of the repository to check.
  • repo_type (str, optional) — Set to "dataset" or "space" if getting repository info from a dataset or a space, None or "model" if getting repository info from a model. Default is None.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Checks if a specific revision exists on a repo on the Hugging Face Hub.

Examples:

>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False

run_as_future

< >

( fn: Callable[..., R] *args **kwargs ) Future

Parameters

  • fn (Callable) — The method to run in the background.
  • *args, **kwargs — Arguments with which the method will be called.

Returns

Future

a Future instance to get the result of the task.

Run a method in the background and return a Future instance.

The main goal is to run methods without blocking the main thread (e.g. to push data during a training). Background jobs are queued to preserve order but are not run in parallel. If you need to speed up your scripts by parallelizing lots of calls to the API, you must set up and use your own ThreadPoolExecutor.

Note: Most-used methods like upload_file(), upload_folder() and create_commit() have a run_as_future: bool argument to directly call them in the background. This is equivalent to calling api.run_as_future(...) on them but less verbose.

Example:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> future = api.run_as_future(api.whoami) # instant
>>> future.done()
False
>>> future.result() # wait until complete and return result
(...)
>>> future.done()
True
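The queued-but-sequential semantics described above can be approximated locally with a single-worker executor (an illustrative sketch of the behavior, not how HfApi is implemented internally):

```python
from concurrent.futures import ThreadPoolExecutor

# One worker thread: jobs run in submission order, never in parallel.
executor = ThreadPoolExecutor(max_workers=1)
results = []

future_a = executor.submit(results.append, "first")
future_b = executor.submit(results.append, "second")

future_b.result()  # blocks until both queued jobs have completed
assert results == ["first", "second"]  # submission order is preserved
executor.shutdown()
```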

scale_to_zero_inference_endpoint

< >

( name: str namespace: Optional[str] = None token: Optional[str] = None ) InferenceEndpoint

Parameters

  • name (str) — The name of the Inference Endpoint to scale to zero.
  • namespace (str, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Returns

InferenceEndpoint

information about the scaled-to-zero Inference Endpoint.

Scale Inference Endpoint to zero.

An Inference Endpoint scaled to zero will not be charged. It will resume on the next request to it, with a cold start delay. This is different from pausing the Inference Endpoint with pause_inference_endpoint(), which would require a manual resume with resume_inference_endpoint().

For convenience, you can also scale an Inference Endpoint to zero using InferenceEndpoint.scale_to_zero().

set_space_sleep_time

< >

( repo_id: str sleep_time: int token: Optional[str] = None ) SpaceRuntime

Parameters

  • repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
  • sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don’t want your Space to pause (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Returns

SpaceRuntime

Runtime information about a Space including Space stage and hardware.

Set a custom sleep time for a Space running on upgraded hardware.

Your Space will go to sleep after X seconds of inactivity. You are not billed when your Space is in “sleep” mode. If a new visitor lands on your Space, it will “wake it up”. Only upgraded hardware can have a configurable sleep time. To know more about the sleep stage, please refer to https://huggingface.co/docs/hub/spaces-gpus#sleep-time.

It is also possible to set a custom sleep time when requesting hardware with request_space_hardware().

snapshot_download

< >

( repo_id: str repo_type: Optional[str] = None revision: Optional[str] = None cache_dir: Union[str, Path, None] = None local_dir: Union[str, Path, None] = None local_dir_use_symlinks: Union[bool, Literal['auto']] = 'auto' proxies: Optional[Dict] = None etag_timeout: float = 10 resume_download: bool = False force_download: bool = False token: Optional[Union[str, bool]] = None local_files_only: bool = False allow_patterns: Optional[Union[List[str], str]] = None ignore_patterns: Optional[Union[List[str], str]] = None max_workers: int = 8 tqdm_class: Optional[base_tqdm] = None )

Parameters

  • repo_id (str) — A user or an organization name and a repo name separated by a /.
  • repo_type (str, optional) — Set to "dataset" or "space" if downloading from a dataset or space, None or "model" if downloading from a model. Default is None.
  • revision (str, optional) — An optional Git revision id which can be a branch name, a tag, or a commit hash.
  • cache_dir (str, Path, optional) — Path to the folder where cached files are stored.
  • local_dir (str or Path, optional) — If provided, the downloaded files will be placed under this directory, either as symlinks (default) or regular files (see description for more details).
  • local_dir_use_symlinks ("auto" or bool, defaults to "auto") — To be used with local_dir. If set to “auto”, the cache directory will be used and the file will be either duplicated or symlinked to the local directory depending on its size. If set to True, a symlink will be created, no matter the file size. If set to False, the file will either be duplicated from cache (if already exists) or downloaded from the Hub and not cached. See description for more details.
  • proxies (dict, optional) — Dictionary mapping protocol to the URL of the proxy passed to requests.request.
  • etag_timeout (float, optional, defaults to 10) — When fetching ETag, how many seconds to wait for the server to send data before giving up which is passed to requests.request.
  • resume_download (bool, optional, defaults to False) — If True, resume a previously interrupted download.
  • force_download (bool, optional, defaults to False) — Whether the file should be downloaded even if it already exists in the local cache.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
  • local_files_only (bool, optional, defaults to False) — If True, avoid downloading the file and return the path to the local cached file if it exists.
  • allow_patterns (List[str] or str, optional) — If provided, only files matching at least one pattern are downloaded.
  • ignore_patterns (List[str] or str, optional) — If provided, files matching any of the patterns are not downloaded.
  • max_workers (int, optional) — Number of concurrent threads to download files (1 thread = 1 file download). Defaults to 8.
  • tqdm_class (tqdm, optional) — If provided, overwrites the default behavior for the progress bar. Passed argument must inherit from tqdm.auto.tqdm or at least mimic its behavior. Note that the tqdm_class is not passed to each individual download. Defaults to the custom HF progress bar that can be disabled by setting HF_HUB_DISABLE_PROGRESS_BARS environment variable.

Download repo files.

Download a whole snapshot of a repo’s files at the specified revision. This is useful when you want all files from a repo, because you don’t know which ones you will need a priori. All files are nested inside a folder in order to keep their actual filename relative to that folder. You can also filter which files to download using allow_patterns and ignore_patterns.
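
The allow/ignore filtering can be sketched with standard wildcard matching. This is an illustrative approximation of the semantics described above (the helper name `should_download` is ours, not part of the library):

```python
from fnmatch import fnmatch

def should_download(path, allow_patterns=None, ignore_patterns=None):
    """Return True if a repo file at `path` passes the pattern filters."""
    # If allow_patterns is set, the file must match at least one of them.
    if allow_patterns is not None and not any(fnmatch(path, p) for p in allow_patterns):
        return False
    # If ignore_patterns is set, the file must match none of them.
    if ignore_patterns is not None and any(fnmatch(path, p) for p in ignore_patterns):
        return False
    return True
```

For example, `allow_patterns=["*.safetensors", "*.json"]` keeps only weights and config files, while `ignore_patterns="*.msgpack"` skips Flax weights. When both are provided, both constraints apply.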

If local_dir is provided, the file structure from the repo will be replicated in this location. You can configure how you want to move those files:

  • If local_dir_use_symlinks="auto" (default), files are downloaded and stored in the cache directory as blob files. Small files (<5MB) are duplicated in local_dir while a symlink is created for bigger files. The goal is to be able to manually edit and save small files without corrupting the cache while saving disk space for binary files. The 5MB threshold can be configured with the HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD environment variable.
  • If local_dir_use_symlinks=True, files are downloaded, stored in the cache directory and symlinked in local_dir. This is optimal in terms of disk usage but files must not be manually edited.
  • If local_dir_use_symlinks=False and the blob files exist in the cache directory, they are duplicated in the local dir. This means disk usage is not optimized.
  • Finally, if local_dir_use_symlinks=False and the blob files do not exist in the cache directory, then the files are downloaded and directly placed under local_dir. This means if you need to download them again later, they will be re-downloaded entirely.
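
The “auto” decision above can be summarized as a size check against a threshold. A minimal sketch, assuming the documented 5MB default (the function name is illustrative; the real logic lives inside huggingface_hub and honors HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD):

```python
AUTO_SYMLINK_THRESHOLD = 5 * 1024 * 1024  # 5MB, the documented default

def use_symlink(file_size: int, local_dir_use_symlinks) -> bool:
    """Return True if a file of `file_size` bytes should be symlinked into local_dir."""
    if local_dir_use_symlinks == "auto":
        # Small files are duplicated (safe to edit), big files are symlinked.
        return file_size >= AUTO_SYMLINK_THRESHOLD
    return bool(local_dir_use_symlinks)
```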

An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly configured. It is also not possible to filter which files to download when cloning a repository using git.

Raises the following errors:

space_info

< >

( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) SpaceInfo

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • revision (str, optional) — The revision of the space repository from which to get the information.
  • timeout (float, optional) — Timeout in seconds for the request to the Hub.
  • files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults to False.
  • token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.

Returns

SpaceInfo

The space repository information.

Get info on one specific Space on huggingface.co.

The Space can be private if you pass a valid token.

Raises the following errors:

  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
  • RevisionNotFoundError If the revision to download from cannot be found.

super_squash_history

< >

( repo_id: str branch: Optional[str] = None commit_message: Optional[str] = None repo_type: Optional[str] = None token: Optional[str] = None )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • branch (str, optional) — The branch to squash. Defaults to the head of the "main" branch.
  • commit_message (str, optional) — The commit message to use for the squashed commit.
  • repo_type (str, optional) — Set to "dataset" or "space" if listing commits from a dataset or a Space, None or "model" if listing from a model. Default is None.
  • token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If the machine is logged in (through huggingface-cli login or login()), token can be automatically retrieved from the cache.

Raises

RepositoryNotFoundError or RevisionNotFoundError or BadRequestError

  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
  • RevisionNotFoundError — If the branch to squash cannot be found.
  • BadRequestError — If the reference is invalid for a branch. You cannot squash history on tags.

Squash commit history on a branch for a repo on the Hub.

Squashing the repo history is useful when you know you’ll make hundreds of commits and you don’t want to clutter the history. Squashing commits can only be performed from the head of a branch.

Once squashed, the commit history cannot be retrieved. This is a non-revertible operation.

Once the history of a branch has been squashed, it is not possible to merge it back into another branch since their history will have diverged.

Example:

>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Create repo
>>> repo_id = api.create_repo("test-squash").repo_id

# Make a lot of commits.
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="lfs.bin", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"another_content")

# Squash history
>>> api.super_squash_history(repo_id=repo_id)

unlike

< >

( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None )

Parameters

  • repo_id (str) — The repository to unlike. Example: "user/my-cool-model".
  • token (str, optional) — Authentication token. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if unliking a dataset or space, None or "model" if unliking a model. Default is None.

Raises

RepositoryNotFoundError

  • RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.

Unlike a given repo on the Hub (e.g. remove from favorite list).

See also like() and list_liked_repos().

Example:

>>> from huggingface_hub import like, list_liked_repos, unlike
>>> like("gpt2")
>>> "gpt2" in list_liked_repos().models
True
>>> unlike("gpt2")
>>> "gpt2" in list_liked_repos().models
False

update_collection_item

< >

( collection_slug: str item_object_id: str note: Optional[str] = None position: Optional[int] = None token: Optional[str] = None )

Parameters

  • collection_slug (str) — Slug of the collection to update. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
  • item_object_id (str) — ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id). It must be retrieved from a CollectionItem object. Example: collection.items[0].item_object_id.
  • note (str, optional) — A note to attach to the item in the collection. The maximum size for a note is 500 characters.
  • position (int, optional) — New position of the item in the collection.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Update an item in a collection.

Example:

>>> from huggingface_hub import get_collection, update_collection_item

# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")

# Update item based on its ID (add note + update position)
>>> update_collection_item(
...     collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
...     item_object_id=collection.items[-1].item_object_id,
...     note="Newly updated model!",
...     position=0,
... )

update_collection_metadata

< >

( collection_slug: str title: Optional[str] = None description: Optional[str] = None position: Optional[int] = None private: Optional[bool] = None theme: Optional[str] = None token: Optional[str] = None )

Parameters

  • collection_slug (str) — Slug of the collection to update. Example: "TheBloke/recent-models-64f9a55bb3115b4f513ec026".
  • title (str) — Title of the collection to update.
  • description (str, optional) — Description of the collection to update.
  • position (int, optional) — New position of the collection in the list of collections of the user.
  • private (bool, optional) — Whether the collection should be private or not.
  • theme (str, optional) — Theme of the collection on the Hub.
  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Update metadata of a collection on the Hub.

All arguments are optional. Only provided metadata will be updated.

Returns: Collection

Example:

>>> from huggingface_hub import update_collection_metadata
>>> collection = update_collection_metadata(
...     collection_slug="username/iccv-2023-64f9a55bb3115b4f513ec026",
...     title="ICCV Oct. 2023",
...     description="Portfolio of models, datasets, papers and demos I presented at ICCV Oct. 2023",
...     private=False,
...     theme="pink",
... )
>>> collection.slug
"username/iccv-oct-2023-64f9a55bb3115b4f513ec026"
# ^collection slug got updated but not the trailing ID

update_inference_endpoint

< >

( name: str accelerator: Optional[str] = None instance_size: Optional[str] = None instance_type: Optional[str] = None min_replica: Optional[int] = None max_replica: Optional[int] = None repository: Optional[str] = None framework: Optional[str] = None revision: Optional[str] = None task: Optional[str] = None namespace: Optional[str] = None token: Optional[str] = None ) InferenceEndpoint

Parameters

  • name (str) — The name of the Inference Endpoint to update.
  • accelerator (str, optional) — The hardware accelerator to be used for inference (e.g. "cpu").
  • instance_size (str, optional) — The size or type of the instance to be used for hosting the model (e.g. "large").
  • instance_type (str, optional) — The cloud instance type where the Inference Endpoint will be deployed (e.g. "c6i").
  • min_replica (int, optional) — The minimum number of replicas (instances) to keep running for the Inference Endpoint.
  • max_replica (int, optional) — The maximum number of replicas (instances) to scale to for the Inference Endpoint.
  • repository (str, optional) — The name of the model repository associated with the Inference Endpoint (e.g. "gpt2").
  • framework (str, optional) — The machine learning framework used for the model (e.g. "custom").
  • revision (str, optional) — The specific model revision to deploy on the Inference Endpoint (e.g. "6c0e6080953db56375760c0471a8c5f2929baf11").
  • task (str, optional) — The task on which to deploy the model (e.g. "text-classification").
  • namespace (str, optional) — The namespace where the Inference Endpoint will be updated. Defaults to the current user’s namespace.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token).

Returns

InferenceEndpoint

information about the updated Inference Endpoint.

Update an Inference Endpoint.

This method allows the update of either the compute configuration, the deployed model, or both. All arguments are optional but at least one must be provided.

For convenience, you can also update an Inference Endpoint using InferenceEndpoint.update().
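
Since only the provided arguments are updated, the call effectively builds a partial payload from its non-None keyword arguments. A hedged sketch of that pattern (the helper name `build_update_payload` is ours, not an internal of huggingface_hub):

```python
def build_update_payload(**kwargs) -> dict:
    """Keep only the fields explicitly provided by the caller."""
    return {key: value for key, value in kwargs.items() if value is not None}

# Only scaling settings are updated; omitted/None fields are untouched.
payload = build_update_payload(min_replica=0, max_replica=2, repository=None)
```

Note that falsy-but-meaningful values such as `min_replica=0` are kept; only `None` means “leave unchanged”.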

update_repo_visibility

< >

( repo_id: str private: bool = False token: Optional[str] = None organization: Optional[str] = None repo_type: Optional[str] = None name: Optional[str] = None )

Parameters

  • repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
  • private (bool, optional, defaults to False) — Whether the model repo should be private.
  • token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.

Update the visibility setting of a repository.

Raises the following errors:

  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.

upload_file

< >

( path_or_fileobj: Union[str, Path, bytes, BinaryIO] path_in_repo: str repo_id: str token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None run_as_future: bool = False ) CommitInfo or Future

Parameters

  • path_or_fileobj (str, Path, bytes, or IO) — Path to a file on the local machine or binary data stream / fileobj / buffer.
  • path_in_repo (str) — Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin"
  • repo_id (str) — The repository to which the file will be uploaded, for example: "username/custom_transformers"
  • token (str, optional) — Authentication token, obtained with the login() method. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
  • commit_message (str, optional) — The summary / title / first line of the generated commit
  • commit_description (str optional) — The description of the generated commit
  • create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
  • parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
  • run_as_future (bool, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passing run_as_future=True will return a Future object. Defaults to False.

Returns

CommitInfo or Future

Instance of CommitInfo containing information about the newly created commit (commit hash, commit url, pr url, commit message,…). If run_as_future=True is passed, returns a Future object which will contain the result when executed.

Upload a local file (up to 50 GB) to the given repo. The upload is done through an HTTP POST request, and doesn’t require git or git-lfs to be installed.

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid
  • RepositoryNotFoundError If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
  • RevisionNotFoundError If the revision to download from cannot be found.

upload_file assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If repo does not exist, create it first using create_repo().

Example:

>>> from huggingface_hub import upload_file

>>> with open("./local/filepath", "rb") as fobj:
...     upload_file(
...         path_or_fileobj=fobj,
...         path_in_repo="remote/file/path.h5",
...         repo_id="username/my-dataset",
...         repo_type="dataset",
...         token="my_token",
...     )
"https://huggingface.co/datasets/username/my-dataset/blob/main/remote/file/path.h5"

>>> upload_file(
...     path_or_fileobj=".\\local\\file\\path",
...     path_in_repo="remote/file/path.h5",
...     repo_id="username/my-model",
...     token="my_token",
... )
"https://huggingface.co/username/my-model/blob/main/remote/file/path.h5"

>>> upload_file(
...     path_or_fileobj=".\\local\\file\\path",
...     path_in_repo="remote/file/path.h5",
...     repo_id="username/my-model",
...     token="my_token",
...     create_pr=True,
... )
"https://huggingface.co/username/my-model/blob/refs%2Fpr%2F1/remote/file/path.h5"

upload_folder

< >

( repo_id: str folder_path: Union[str, Path] path_in_repo: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None allow_patterns: Optional[Union[List[str], str]] = None ignore_patterns: Optional[Union[List[str], str]] = None delete_patterns: Optional[Union[List[str], str]] = None multi_commits: bool = False multi_commits_verbose: bool = False run_as_future: bool = False ) CommitInfo or Future

Parameters

  • repo_id (str) — The repository to which the file will be uploaded, for example: "username/custom_transformers"
  • folder_path (str or Path) — Path to the folder to upload on the local file system
  • path_in_repo (str, optional) — Relative path of the directory in the repo, for example: "checkpoints/1fec34a/results". Will default to the root folder of the repository.
  • token (str, optional) — Authentication token, obtained with the login() method. Will default to the stored token.
  • repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
  • commit_message (str, optional) — The summary / title / first line of the generated commit. Defaults to: f"Upload {path_in_repo} with huggingface_hub"
  • commit_description (str optional) — The description of the generated commit
  • create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server. If both multi_commits and create_pr are True, the PR created in the multi-commit process is kept opened.
  • parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
  • allow_patterns (List[str] or str, optional) — If provided, only files matching at least one pattern are uploaded.
  • ignore_patterns (List[str] or str, optional) — If provided, files matching any of the patterns are not uploaded.
  • delete_patterns (List[str] or str, optional) — If provided, remote files matching any of the patterns will be deleted from the repo while committing new files. This is useful if you don’t know which files have already been uploaded. Note: to avoid discrepancies the .gitattributes file is not deleted even if it matches the pattern.
  • multi_commits (bool) — If True, changes are pushed to a PR using a multi-commit process. Defaults to False.
  • multi_commits_verbose (bool) — If True and multi_commits is used, more information will be displayed to the user.
  • run_as_future (bool, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passing run_as_future=True will return a Future object. Defaults to False.

Returns

CommitInfo or Future

Instance of CommitInfo containing information about the newly created commit (commit hash, commit url, pr url, commit message, …). If multi_commits=True, returns instead the url (str) of the PR created to push the changes. In either case, if run_as_future=True is passed, returns a Future object which will contain the result when executed.

Upload a local folder to the given repo. The upload is done through HTTP requests, and doesn’t require git or git-lfs to be installed.

The structure of the folder will be preserved. Files with the same name already present in the repository will be overwritten. Others will be left untouched.

Use the allow_patterns and ignore_patterns arguments to specify which files to upload. These parameters accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as documented here. If both allow_patterns and ignore_patterns are provided, both constraints apply. By default, all files from the folder are uploaded.

Use the delete_patterns argument to specify remote files you want to delete. Input type is the same as for allow_patterns (see above). If path_in_repo is also provided, the patterns are matched against paths relative to this folder. For example, upload_folder(..., path_in_repo="experiment", delete_patterns="logs/*") will delete any remote file under ./experiment/logs/. Note that the .gitattributes file will not be deleted even if it matches the patterns.
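
The relative matching described above can be sketched in a few lines. This is an illustrative approximation, not the library's internal code (the helper name `matches_delete` is ours):

```python
from fnmatch import fnmatch
import posixpath

def matches_delete(remote_path: str, path_in_repo: str, pattern: str) -> bool:
    """Check whether a remote file matches a delete pattern relative to path_in_repo."""
    rel = posixpath.relpath(remote_path, path_in_repo)
    return fnmatch(rel, pattern)
```

With `path_in_repo="experiment"` and `pattern="logs/*"`, the file `experiment/logs/run1.txt` matches (relative path `logs/run1.txt`) while `experiment/data/run1.txt` does not.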

Any .git/ folder present in any subdirectory will be ignored. However, please be aware that the .gitignore file is not taken into account.

Uses HfApi.create_commit under the hood.

Raises the following errors:

  • HTTPError if the HuggingFace API returned an error
  • ValueError if some parameter value is invalid

upload_folder assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If repo does not exist, create it first using create_repo().

multi_commits is experimental. Its API and behavior are subject to change in the future without prior notice.

Example:

# Upload checkpoints folder except the log files
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     ignore_patterns="**/logs/*.txt",
... )
# "https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints"

# Upload checkpoints folder including logs while deleting existing logs from the repo
# Useful if you don't know exactly which log files have already been pushed
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     delete_patterns="**/logs/*.txt",
... )
"https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints"

# Upload checkpoints folder while creating a PR
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     create_pr=True,
... )
"https://huggingface.co/datasets/username/my-dataset/tree/refs%2Fpr%2F1/remote/experiment/checkpoints"

whoami

< >

( token: Optional[str] = None )

Parameters

  • token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.

Call HF API to know “whoami”.

huggingface_hub.plan_multi_commits

< >

( operations: Iterable max_operations_per_commit: int = 50 max_upload_size_per_commit: int = 2147483648 ) Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]

Parameters

  • operations (List of CommitOperation()) — The list of operations to split into commits.
  • max_operations_per_commit (int) — Maximum number of operations in a single commit. Defaults to 50.
  • max_upload_size_per_commit (int) — Maximum size to upload (in bytes) in a single commit. Defaults to 2GB. Files bigger than this limit are uploaded, 1 per commit.

Returns

Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]

a tuple. First item is a list of lists of CommitOperationAdd representing the addition commits to push. The second item is a list of lists of CommitOperationDelete representing the deletion commits.

Split a list of operations in a list of commits to perform.

Implementation follows a sub-optimal (yet simple) algorithm:

  1. Delete operations are grouped together by commits of maximum max_operations_per_commit operations.
  2. All additions exceeding max_upload_size_per_commit are committed 1 by 1.
  3. All remaining additions are grouped together and split each time the max_operations_per_commit or the max_upload_size_per_commit limit is reached.

We do not try to optimize the splitting to get the lowest number of commits as this is a NP-hard problem (see bin packing problem). For our use case, it is not problematic to use a sub-optimal solution so we favored an easy-to-explain implementation.
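
The greedy grouping can be sketched on file sizes alone. This is a simplified illustration of the splitting strategy described above, not the library's implementation (real operations carry more than a size):

```python
def plan_commits(sizes, max_ops=50, max_size=2 * 1024**3):
    """Greedily group file sizes into commits bounded by count and total bytes."""
    commits, current, current_size = [], [], 0
    for size in sizes:
        if size > max_size:
            # Oversized files each get their own commit.
            commits.append([size])
            continue
        if len(current) == max_ops or current_size + size > max_size:
            # Current commit is full: start a new one.
            commits.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        commits.append(current)
    return commits
```

As with plan_multi_commits itself, this favors simplicity over an optimal bin packing.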

plan_multi_commits is experimental. Its API and behavior are subject to change in the future without prior notice.

Example:

>>> from huggingface_hub import HfApi, plan_multi_commits
>>> addition_commits, deletion_commits = plan_multi_commits(
...     operations=[
...          CommitOperationAdd(...),
...          CommitOperationAdd(...),
...          CommitOperationDelete(...),
...          CommitOperationDelete(...),
...          CommitOperationAdd(...),
...     ],
... )
>>> HfApi().create_commits_on_pr(
...     repo_id="my-cool-model",
...     addition_commits=addition_commits,
...     deletion_commits=deletion_commits,
...     (...)
...     verbose=True,
... )

The initial order of the operations is not guaranteed! All deletions will be performed before additions. If you are not updating multiple times the same file, you are fine.

API Dataclasses

AccessRequest

class huggingface_hub.hf_api.AccessRequest

< >

( username: str fullname: str email: str timestamp: datetime status: Literal['pending', 'accepted', 'rejected'] fields: Optional[Dict[str, Any]] = None )

Parameters

  • username (str) — Username of the user who requested access.
  • fullname (str) — Fullname of the user who requested access.
  • email (str) — Email of the user who requested access.
  • timestamp (datetime) — Timestamp of the request.
  • status (Literal["pending", "accepted", "rejected"]) — Status of the request. Can be one of ["pending", "accepted", "rejected"].
  • fields (Dict[str, Any], optional) — Additional fields filled by the user in the gate form.

Data structure containing information about a user access request.

CommitInfo

class huggingface_hub.CommitInfo

< >

( *args commit_url: str _url: Optional[str] = None **kwargs )

Parameters

  • commit_url (str) — Url where to find the commit.
  • commit_message (str) — The summary (first line) of the commit that has been created.
  • commit_description (str) — Description of the commit that has been created. Can be empty.
  • oid (str) — Commit hash id. Example: "91c54ad1727ee830252e457677f467be0bfd8a57".
  • pr_url (str, optional) — Url to the PR that has been created, if any. Populated when create_pr=True is passed.
  • pr_revision (str, optional) — Revision of the PR that has been created, if any. Populated when create_pr=True is passed. Example: "refs/pr/1".
  • pr_num (int, optional) — Number of the PR discussion that has been created, if any. Populated when create_pr=True is passed. Can be passed as discussion_num in get_discussion_details(). Example: 1.
  • _url (str, optional) — Legacy url for str compatibility. Can be the url to the uploaded file on the Hub (if returned by upload_file()), to the uploaded folder on the Hub (if returned by upload_folder()) or to the commit on the Hub (if returned by create_commit()). Defaults to commit_url. It is deprecated to use this attribute. Please use commit_url instead.

Data structure containing information about a newly created commit.

Returned by any method that creates a commit on the Hub: create_commit(), upload_file(), upload_folder(), delete_file(), delete_folder(). It inherits from str for backward compatibility but using methods specific to str is deprecated.

DatasetInfo

class huggingface_hub.hf_api.DatasetInfo

< >

( **kwargs )

Parameters

  • id (str) — ID of dataset.
  • author (str) — Author of the dataset.
  • sha (str) — Repo SHA at this particular revision.
  • created_at (datetime, optional) — Date of creation of the repo on the Hub. Note that the lowest value is 2022-03-02T23:29:04.000Z, corresponding to the date when we began to store creation dates.
  • last_modified (datetime, optional) — Date of last commit to the repo.
  • private (bool) — Is the repo private.
  • disabled (bool, optional) — Is the repo disabled.
  • gated (Literal["auto", "manual", False], optional) — Is the repo gated. If so, whether there is manual or automatic approval.
  • downloads (int) — Number of downloads of the dataset.
  • likes (int) — Number of likes of the dataset.
  • tags (List[str]) — List of tags of the dataset.
  • card_data (DatasetCardData, optional) — Model Card Metadata as a huggingface_hub.repocard_data.DatasetCardData object.
  • siblings (List[RepoSibling]) — List of huggingface_hub.hf_api.RepoSibling objects that constitute the dataset.

Contains information about a dataset on the Hub.

Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. On the contrary, when listing datasets using list_datasets() only a subset of the attributes are returned.

GitRefInfo

class huggingface_hub.GitRefInfo

< >

( name: str ref: str target_commit: str )

Parameters

  • name (str) — Name of the reference (e.g. tag name or branch name).
  • ref (str) — Full git ref on the Hub (e.g. "refs/heads/main" or "refs/tags/v1.0").
  • target_commit (str) — OID of the target commit for the ref (e.g. "e7da7f221d5bf496a48136c0cd264e630fe9fcc8")

Contains information about a git reference for a repo on the Hub.

GitCommitInfo

class huggingface_hub.GitCommitInfo

< >

( commit_id: str authors: List[str] created_at: datetime title: str message: str formatted_title: Optional[str] formatted_message: Optional[str] )

Parameters

  • commit_id (str) — OID of the commit (e.g. "e7da7f221d5bf496a48136c0cd264e630fe9fcc8")
  • authors (List[str]) — List of authors of the commit.
  • created_at (datetime) — Datetime when the commit was created.
  • title (str) — Title of the commit. This is a free-text value entered by the authors.
  • message (str) — Description of the commit. This is a free-text value entered by the authors.
  • formatted_title (str) — Title of the commit formatted as HTML. Only returned if formatted=True is set.
  • formatted_message (str) — Description of the commit formatted as HTML. Only returned if formatted=True is set.

Contains information about a git commit for a repo on the Hub. Check out list_repo_commits() for more details.

GitRefs

class huggingface_hub.GitRefs

< >

( branches: List[GitRefInfo] converts: List[GitRefInfo] tags: List[GitRefInfo] pull_requests: Optional[List[GitRefInfo]] = None )

Parameters

  • branches (List[GitRefInfo]) — A list of GitRefInfo containing information about branches on the repo.
  • converts (List[GitRefInfo]) — A list of GitRefInfo containing information about “convert” refs on the repo. Converts are refs used (internally) to push preprocessed data in Dataset repos.
  • tags (List[GitRefInfo]) — A list of GitRefInfo containing information about tags on the repo.
  • pull_requests (List[GitRefInfo], optional) — A list of GitRefInfo containing information about pull requests on the repo. Only returned if include_prs=True is set.

Contains information about all git references for a repo on the Hub.

Object is returned by list_repo_refs().
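As an illustration of working with the returned refs, the sketch below looks up a tag's target commit. It uses a local dataclass that mirrors GitRefInfo rather than the real class:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Ref:  # local stand-in for huggingface_hub.GitRefInfo
    name: str
    ref: str
    target_commit: str

def tag_target(tags: List[Ref], tag_name: str) -> Optional[str]:
    """Return the commit OID a tag points to, or None if the tag is absent."""
    for ref in tags:
        if ref.name == tag_name:
            return ref.target_commit
    return None
```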

ModelInfo

class huggingface_hub.hf_api.ModelInfo

< >

( **kwargs )

Parameters

  • id (str) — ID of model.
  • author (str, optional) — Author of the model.
  • sha (str, optional) — Repo SHA at this particular revision.
  • created_at (datetime, optional) — Date of creation of the repo on the Hub. Note that the lowest value is 2022-03-02T23:29:04.000Z, corresponding to the date when we began to store creation dates.
  • last_modified (datetime, optional) — Date of last commit to the repo.
  • private (bool) — Is the repo private.
  • disabled (bool, optional) — Is the repo disabled.
  • gated (Literal["auto", "manual", False], optional) — Is the repo gated. If so, whether there is manual or automatic approval.
  • downloads (int) — Number of downloads of the model.
  • likes (int) — Number of likes of the model.
  • library_name (str, optional) — Library associated with the model.
  • tags (List[str]) — List of tags of the model. Compared to card_data.tags, contains extra tags computed by the Hub (e.g. supported libraries, model’s arXiv).
  • pipeline_tag (str, optional) — Pipeline tag associated with the model.
  • mask_token (str, optional) — Mask token used by the model.
  • widget_data (Any, optional) — Widget data associated with the model.
  • model_index (Dict, optional) — Model index for evaluation.
  • config (Dict, optional) — Model configuration.
  • transformers_info (TransformersInfo, optional) — Transformers-specific info (auto class, processor, etc.) associated with the model.
  • card_data (ModelCardData, optional) — Model Card Metadata as a huggingface_hub.repocard_data.ModelCardData object.
  • siblings (List[RepoSibling]) — List of huggingface_hub.hf_api.RepoSibling objects that constitute the model.
  • spaces (List[str], optional) — List of spaces using the model.
  • safetensors (SafeTensorsInfo, optional) — Model’s safetensors information.

Contains information about a model on the Hub.

Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. Conversely, when listing models using list_models(), only a subset of the attributes is returned.

RepoSibling

class huggingface_hub.hf_api.RepoSibling

< >

( rfilename: str size: Optional[int] = None blob_id: Optional[str] = None lfs: Optional[BlobLfsInfo] = None )

Parameters

  • rfilename (str) — file name, relative to the repo root.
  • size (int, optional) — The file’s size, in bytes. This attribute is defined when files_metadata argument of repo_info() is set to True. It’s None otherwise.
  • blob_id (str, optional) — The file’s git OID. This attribute is defined when files_metadata argument of repo_info() is set to True. It’s None otherwise.
  • lfs (BlobLfsInfo, optional) — The file’s LFS metadata. This attribute is defined when the files_metadata argument of repo_info() is set to True and the file is stored with Git LFS. It’s None otherwise.

Contains basic information about a repo file inside a repo on the Hub.

All attributes of this class are optional except rfilename. This is because only the file names are returned when listing repositories on the Hub (with list_models(), list_datasets() or list_spaces()). If you need more information like file size, blob id or lfs details, you must request them specifically from one repo at a time (using model_info(), dataset_info() or space_info()), as retrieving them puts more load on the backend server.
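For example, computing a repository's total size only works for siblings fetched with files_metadata=True; otherwise size is None. A hedged sketch using a local stand-in for RepoSibling:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sibling:  # local stand-in for huggingface_hub.hf_api.RepoSibling
    rfilename: str
    size: Optional[int] = None  # None unless files_metadata=True was used

def total_known_size(siblings: List[Sibling]) -> int:
    """Sum the sizes that were returned; files without metadata count as 0."""
    return sum(s.size or 0 for s in siblings)
```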

RepoFile

class huggingface_hub.hf_api.RepoFile

< >

( **kwargs )

Parameters

  • path (str) — file path relative to the repo root.
  • size (int) — The file’s size, in bytes.
  • blob_id (str) — The file’s git OID.
  • lfs (BlobLfsInfo) — The file’s LFS metadata.
  • last_commit (LastCommitInfo, optional) — The file’s last commit metadata. Only defined if list_repo_tree() and get_paths_info() are called with expand=True.
  • security (BlobSecurityInfo, optional) — The file’s security scan metadata. Only defined if list_repo_tree() and get_paths_info() are called with expand=True.

Contains information about a file on the Hub.

RepoUrl

class huggingface_hub.RepoUrl

< >

( url: Any endpoint: Optional[str] = None )

Parameters

  • url (Any) — String value of the repo url.
  • endpoint (str, optional) — Endpoint of the Hub. Defaults to https://huggingface.co.

Subclass of str describing a repo URL on the Hub.

RepoUrl is returned by HfApi.create_repo. It inherits from str for backward compatibility. At initialization, the URL is parsed to populate properties:

  • endpoint (str)
  • namespace (Optional[str])
  • repo_name (str)
  • repo_id (str)
  • repo_type (Literal["model", "dataset", "space"])
  • url (str)

Example:

>>> RepoUrl('https://huggingface.co/gpt2')
RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2')

>>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co')
RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset')

>>> RepoUrl('hf://datasets/my-user/my-dataset')
RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='my-user/my-dataset')

>>> HfApi().create_repo("dummy_model")
RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model')
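The parsing behaviour shown above can be approximated in a few lines of standard-library code. This is an illustrative sketch, not the library's implementation:

```python
from urllib.parse import urlparse

def parse_repo_url(url: str):
    """Approximate RepoUrl's parsing: derive repo_type, namespace and repo_id."""
    u = urlparse(url)
    # For "hf://datasets/user/name", urlparse puts "datasets" in netloc.
    head = [u.netloc] if u.scheme == "hf" else []
    parts = [p for p in head + u.path.strip("/").split("/") if p]
    kind = {"models": "model", "datasets": "dataset", "spaces": "space"}
    if parts[0] in kind:
        repo_type = kind[parts[0]]
        parts = parts[1:]
    else:
        repo_type = "model"  # no prefix means a model repo
    namespace = parts[0] if len(parts) == 2 else None
    repo_name = parts[-1]
    repo_id = f"{namespace}/{repo_name}" if namespace else repo_name
    return repo_type, namespace, repo_id
```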

SafetensorsRepoMetadata

class huggingface_hub.utils.SafetensorsRepoMetadata

< >

( metadata: Optional[Dict] sharded: bool weight_map: Dict[str, str] files_metadata: Dict[str, SafetensorsFileMetadata] )

Parameters

  • metadata (Dict, optional) — The metadata contained in the ‘model.safetensors.index.json’ file, if it exists. Only populated for sharded models.
  • sharded (bool) — Whether the repo contains a sharded model or not.
  • weight_map (Dict[str, str]) — A map of all weights. Keys are tensor names and values are filenames of the files containing the tensors.
  • files_metadata (Dict[str, SafetensorsFileMetadata]) — A map of all files metadata. Keys are filenames and values are the metadata of the corresponding file, as a SafetensorsFileMetadata object.
  • parameter_count (Dict[str, int]) — A map of the number of parameters per data type. Keys are data types and values are the number of parameters of that data type.

Metadata for a Safetensors repo.

A repo is considered to be a Safetensors repo if it contains either a ‘model.safetensors’ weight file (non-sharded model) or a ‘model.safetensors.index.json’ index file (sharded model) at its root.

This class is returned by get_safetensors_metadata().

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
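As a small illustration of how weight_map is typically used, the sketch below determines which shard files must be fetched for a subset of tensors (the tensor and shard names are made up):

```python
# Shape of a weight_map, as found in 'model.safetensors.index.json'
# for a sharded model (tensor name -> shard filename).
weight_map = {
    "embed.weight": "model-00001-of-00002.safetensors",
    "layer.0.weight": "model-00001-of-00002.safetensors",
    "lm_head.weight": "model-00002-of-00002.safetensors",
}

def shards_for(tensor_names, weight_map):
    """Set of shard files needed to load the given tensors."""
    return {weight_map[name] for name in tensor_names}
```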

SafetensorsFileMetadata

class huggingface_hub.utils.SafetensorsFileMetadata

< >

( metadata: Dict tensors: Dict )

Parameters

  • metadata (Dict) — The metadata contained in the file.
  • tensors (Dict[str, TensorInfo]) — A map of all tensors. Keys are tensor names and values are information about the corresponding tensor, as a TensorInfo object.
  • parameter_count (Dict[str, int]) — A map of the number of parameters per data type. Keys are data types and values are the number of parameters of that data type.

Metadata for a Safetensors file hosted on the Hub.

This class is returned by parse_safetensors_file_metadata().

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.

SpaceInfo

class huggingface_hub.hf_api.SpaceInfo

< >

( **kwargs )

Parameters

  • id (str) — ID of the Space.
  • author (str, optional) — Author of the Space.
  • sha (str, optional) — Repo SHA at this particular revision.
  • created_at (datetime, optional) — Date of creation of the repo on the Hub. Note that the lowest value is 2022-03-02T23:29:04.000Z, corresponding to the date when we began to store creation dates.
  • last_modified (datetime, optional) — Date of last commit to the repo.
  • private (bool) — Is the repo private.
  • gated (Literal["auto", "manual", False], optional) — Is the repo gated. If so, whether there is manual or automatic approval.
  • disabled (bool, optional) — Is the Space disabled.
  • host (str, optional) — Host URL of the Space.
  • subdomain (str, optional) — Subdomain of the Space.
  • likes (int) — Number of likes of the Space.
  • tags (List[str]) — List of tags of the Space.
  • siblings (List[RepoSibling]) — List of huggingface_hub.hf_api.RepoSibling objects that constitute the Space.
  • card_data (SpaceCardData, optional) — Space Card Metadata as a huggingface_hub.repocard_data.SpaceCardData object.
  • runtime (SpaceRuntime, optional) — Space runtime information as a huggingface_hub.hf_api.SpaceRuntime object.
  • sdk (str, optional) — SDK used by the Space.
  • models (List[str], optional) — List of models used by the Space.
  • datasets (List[str], optional) — List of datasets used by the Space.

Contains information about a Space on the Hub.

Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. Conversely, when listing spaces using list_spaces(), only a subset of the attributes is returned.

TensorInfo

class huggingface_hub.utils.TensorInfo

< >

( dtype: str shape: List[int] data_offsets: Tuple[int, int] )

Parameters

  • dtype (str) — The data type of the tensor (“F64”, “F32”, “F16”, “BF16”, “I64”, “I32”, “I16”, “I8”, “U8”, “BOOL”).
  • shape (List[int]) — The shape of the tensor.
  • data_offsets (Tuple[int, int]) — The offsets of the data in the file as a tuple [BEGIN, END].
  • parameter_count (int) — The number of parameters in the tensor.

Information about a tensor.

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
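The fields above are mutually consistent: parameter_count is the product of shape, and the byte span data_offsets[1] - data_offsets[0] equals that count times the dtype's size. A sketch that checks this invariant (per-dtype byte sizes assumed from the safetensors format):

```python
from math import prod

# Per-dtype sizes in bytes for the safetensors dtypes listed above.
DTYPE_BYTES = {"F64": 8, "I64": 8, "F32": 4, "I32": 4, "F16": 2,
               "BF16": 2, "I16": 2, "I8": 1, "U8": 1, "BOOL": 1}

def tensor_stats(dtype, shape, data_offsets):
    """Return (parameter_count, byte_span) and verify they agree:
    byte_span == parameter_count * size-of-dtype."""
    count = prod(shape)
    begin, end = data_offsets
    assert end - begin == count * DTYPE_BYTES[dtype]
    return count, end - begin
```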

User

class huggingface_hub.User

< >

( **kwargs )

Parameters

  • avatar_url (str) — URL of the user’s avatar.
  • username (str) — Name of the user on the Hub (unique).
  • fullname (str) — User’s full name.
  • is_pro (bool, optional) — Whether the user is a pro user.
  • num_models (int, optional) — Number of models created by the user.
  • num_datasets (int, optional) — Number of datasets created by the user.
  • num_spaces (int, optional) — Number of spaces created by the user.
  • num_discussions (int, optional) — Number of discussions initiated by the user.
  • num_papers (int, optional) — Number of papers authored by the user.
  • num_upvotes (int, optional) — Number of upvotes received by the user.
  • num_likes (int, optional) — Number of likes given by the user.
  • is_following (bool, optional) — Whether the authenticated user is following this user.
  • details (str, optional) — User’s details.

Contains information about a user on the Hub.

UserLikes

class huggingface_hub.UserLikes

< >

( user: str total: int datasets: List[str] models: List[str] spaces: List[str] )

Parameters

  • user (str) — Name of the user for which we fetched the likes.
  • total (int) — Total number of likes.
  • datasets (List[str]) — List of datasets liked by the user (as repo_ids).
  • models (List[str]) — List of models liked by the user (as repo_ids).
  • spaces (List[str]) — List of spaces liked by the user (as repo_ids).

Contains information about a user’s likes on the Hub.

CommitOperation

Below are the supported values for CommitOperation():

class huggingface_hub.CommitOperationAdd

< >

( path_in_repo: str path_or_fileobj: Union[str, Path, bytes, BinaryIO] )

Parameters

  • path_in_repo (str) — Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin"
  • path_or_fileobj (str, Path, bytes, or BinaryIO) — Either:
    • a path to a local file (as str or pathlib.Path) to upload
    • a buffer of bytes (bytes) holding the content of the file to upload
    • a “file object” (subclass of io.BufferedIOBase), typically obtained with open(path, "rb"). It must support seek() and tell() methods.

Raises

ValueError

  • ValueError — If path_or_fileobj is not one of str, Path, bytes or io.BufferedIOBase.
  • ValueError — If path_or_fileobj is a str or Path but not a path to an existing file.
  • ValueError — If path_or_fileobj is an io.BufferedIOBase but it doesn’t support both seek() and tell().

Data structure holding necessary info to upload a file to a repository on the Hub.
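The ValueError rules above can be restated as a small validator. This is an illustrative sketch, not the library's actual check:

```python
import io
from pathlib import Path

def check_path_or_fileobj(value):
    """Restate CommitOperationAdd's documented ValueError rules."""
    if isinstance(value, (str, Path)):
        if not Path(value).is_file():
            raise ValueError(f"{value!r} is not a path to an existing file")
    elif isinstance(value, io.BufferedIOBase):
        # The real check requires both seek() and tell(); seekable()
        # is used here as a close stdlib proxy.
        if not value.seekable():
            raise ValueError("file object must support seek() and tell()")
    elif not isinstance(value, bytes):
        raise ValueError("must be str, Path, bytes or io.BufferedIOBase")
```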

as_file

< >

( with_tqdm: bool = False )

Parameters

  • with_tqdm (bool, optional, defaults to False) — If True, iterating over the file object will display a progress bar. Only works when path_or_fileobj points to a local file; pure bytes and buffers are not supported.

A context manager that yields a file-like object allowing you to read the underlying data behind path_or_fileobj.

Example:

>>> operation = CommitOperationAdd(
...        path_in_repo="remote/dir/weights.h5",
...        path_or_fileobj="./local/weights.h5",
... )
>>> operation
CommitOperationAdd(path_in_repo='remote/dir/weights.h5', path_or_fileobj='./local/weights.h5')

>>> with operation.as_file() as file:
...     content = file.read()

>>> with operation.as_file(with_tqdm=True) as file:
...     while True:
...         data = file.read(1024)
...         if not data:
...              break
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]

>>> with operation.as_file(with_tqdm=True) as file:
...     requests.put(..., data=file)
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]

b64content

< >

( )

The base64-encoded content of path_or_fileobj

Returns: bytes

class huggingface_hub.CommitOperationDelete

< >

( path_in_repo: str is_folder: Union[bool, Literal['auto']] = 'auto' )

Parameters

  • path_in_repo (str) — Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin" for a file or "checkpoints/1fec34a/" for a folder.
  • is_folder (bool or Literal["auto"], optional) — Whether the Delete Operation applies to a folder or not. If “auto”, the path type (file or folder) is guessed automatically by checking whether the path ends with a “/” (folder) or not (file). To explicitly set the path type, set is_folder=True or is_folder=False.

Data structure holding necessary info to delete a file or a folder from a repository on the Hub.

class huggingface_hub.CommitOperationCopy

< >

( src_path_in_repo: str path_in_repo: str src_revision: Optional[str] = None )

Parameters

  • src_path_in_repo (str) — Relative filepath in the repo of the file to be copied, e.g. "checkpoints/1fec34a/weights.bin".
  • path_in_repo (str) — Relative filepath in the repo where to copy the file, e.g. "checkpoints/1fec34a/weights_copy.bin".
  • src_revision (str, optional) — The git revision of the file to be copied. Can be any valid git revision. Defaults to the target commit revision.

Data structure holding necessary info to copy a file in a repository on the Hub.

Limitations:

  • Only LFS files can be copied. To copy a regular file, you need to download it locally and re-upload it.
  • Cross-repository copies are not supported.

Note: you can combine a CommitOperationCopy and a CommitOperationDelete to rename an LFS file on the Hub.

CommitScheduler

class huggingface_hub.CommitScheduler

< >

( repo_id: str folder_path: Union[str, Path] every: Union[int, float] = 5 path_in_repo: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None private: bool = False token: Optional[str] = None allow_patterns: Union[List[str], str, None] = None ignore_patterns: Union[List[str], str, None] = None squash_history: bool = False hf_api: Optional[HfApi] = None )

Parameters

  • repo_id (str) — The id of the repo to commit to.
  • folder_path (str or Path) — Path to the local folder to upload regularly.
  • every (int or float, optional) — The number of minutes between each commit. Defaults to 5 minutes.
  • path_in_repo (str, optional) — Relative path of the directory in the repo, for example: "checkpoints/". Defaults to the root folder of the repository.
  • repo_type (str, optional) — The type of the repo to commit to. Defaults to model.
  • revision (str, optional) — The revision of the repo to commit to. Defaults to main.
  • private (bool, optional) — Whether to make the repo private. Defaults to False. This value is ignored if the repo already exists.
  • token (str, optional) — The token to use to commit to the repo. Defaults to the token saved on the machine.
  • allow_patterns (List[str] or str, optional) — If provided, only files matching at least one pattern are uploaded.
  • ignore_patterns (List[str] or str, optional) — If provided, files matching any of the patterns are not uploaded.
  • squash_history (bool, optional) — Whether to squash the history of the repo after each commit. Defaults to False. Squashing commits is useful to avoid degraded performances on the repo when it grows too large.
  • hf_api (HfApi, optional) — The HfApi client to use to commit to the Hub. Can be set with custom settings (user agent, token,…).

Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).

The scheduler is started when instantiated and runs indefinitely. A last commit is triggered at the end of your script. Check out the upload guide to learn more about how to use it.

Example:

>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler

# Scheduler uploads every 10 minutes
>>> csv_path = Path("watched_folder/data.csv")
>>> scheduler = CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)

>>> with csv_path.open("a") as f:
...     f.write("first line")

# Some time later (...)
>>> with csv_path.open("a") as f:
...     f.write("second line")

push_to_hub

< >

( )

Push folder to the Hub and return the commit info.

This method is not meant to be called directly. It is run in the background by the scheduler, respecting a queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency issues.

The default behavior of push_to_hub is to assume an append-only folder. It lists all files in the folder and uploads only changed files. If no changes are found, the method returns without committing anything. If you want to change this behavior, you can inherit from CommitScheduler and override this method. This can be useful for example to compress data together in a single file before committing. For more details and examples, check out our integration guide.
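The changed-file detection described above can be sketched by hashing files between pushes. This is an illustrative approximation (CommitScheduler tracks uploads differently internally):

```python
import hashlib
from pathlib import Path

def changed_files(folder: Path, last_seen: dict) -> list:
    """Return files whose content changed since the previous call.

    last_seen maps relative path -> sha256 of the last pushed content;
    it is updated in place, so a second call with no edits returns [].
    """
    to_upload = []
    for path in sorted(folder.rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        rel = str(path.relative_to(folder))
        if last_seen.get(rel) != digest:
            to_upload.append(rel)
            last_seen[rel] = digest
    return to_upload
```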

stop

< >

( )

Stop the scheduler.

A stopped scheduler cannot be restarted. Mostly useful for testing purposes.

trigger

< >

( )

Trigger a push_to_hub and return a future.

This method is automatically called at the interval defined by every. You can also call it manually to trigger a commit immediately, without waiting for the next scheduled commit.

Search helpers

Some helpers to filter repositories on the Hub are available in the huggingface_hub package.

DatasetFilter

class huggingface_hub.DatasetFilter

< >

( author: Optional[str] = None benchmark: Union[str, List[str], None] = None dataset_name: Optional[str] = None language_creators: Union[str, List[str], None] = None language: Union[str, List[str], None] = None multilinguality: Union[str, List[str], None] = None size_categories: Union[str, List[str], None] = None task_categories: Union[str, List[str], None] = None task_ids: Union[str, List[str], None] = None )

Parameters

  • author (str, optional) — A string that can be used to identify datasets on the Hub by the original uploader (author or organization), such as facebook or huggingface.
  • benchmark (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub by their official benchmark.
  • dataset_name (str, optional) — A string that can be used to identify datasets on the Hub by name, such as SQAC or wikineural.
  • language_creators (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub with how the data was curated, such as crowdsourced or machine_generated.
  • language (str or List, optional) — A string or list of strings representing a two-character language to filter datasets by on the Hub.
  • multilinguality (str or List, optional) — A string or list of strings representing a filter for datasets that contain multiple languages.
  • size_categories (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub by the size of the dataset such as 100K<n<1M or 1M<n<10M.
  • task_categories (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub by the designed task, such as audio_classification or named_entity_recognition.
  • task_ids (str or List, optional) — A string or list of strings that can be used to identify datasets on the Hub by the specific task such as speech_emotion_recognition or paraphrase.

A class that converts human-readable dataset search parameters into ones compatible with the REST API. For all parameters capitalization does not matter.

The DatasetFilter class is deprecated and will be removed in huggingface_hub>=0.24. Please pass the filter parameters as keyword arguments directly to list_datasets().

Examples:

>>> from huggingface_hub import DatasetFilter

>>> # Using author
>>> new_filter = DatasetFilter(author="facebook")

>>> # Using benchmark
>>> new_filter = DatasetFilter(benchmark="raft")

>>> # Using dataset_name
>>> new_filter = DatasetFilter(dataset_name="wikineural")

>>> # Using language_creators
>>> new_filter = DatasetFilter(language_creators="crowdsourced")

>>> # Using language
>>> new_filter = DatasetFilter(language="en")

>>> # Using multilinguality
>>> new_filter = DatasetFilter(multilinguality="multilingual")

>>> # Using size_categories
>>> new_filter = DatasetFilter(size_categories="100K<n<1M")

>>> # Using task_categories
>>> new_filter = DatasetFilter(task_categories="audio_classification")

>>> # Using task_ids
>>> new_filter = DatasetFilter(task_ids="paraphrase")

ModelFilter

class huggingface_hub.ModelFilter

< >

( author: Optional[str] = None library: Union[str, List[str], None] = None language: Union[str, List[str], None] = None model_name: Optional[str] = None task: Union[str, List[str], None] = None trained_dataset: Union[str, List[str], None] = None tags: Union[str, List[str], None] = None )

Parameters

  • author (str, optional) — A string that can be used to identify models on the Hub by the original uploader (author or organization), such as facebook or huggingface.
  • library (str or List, optional) — A string or list of strings of foundational libraries models were originally trained from, such as pytorch, tensorflow, or allennlp.
  • language (str or List, optional) — A string or list of strings of languages, both by name and country code, such as “en” or “English”
  • model_name (str, optional) — A string that contains complete or partial names for models on the Hub, such as “bert” or “bert-base-cased”
  • task (str or List, optional) — A string or list of strings of tasks models were designed for, such as: “fill-mask” or “automatic-speech-recognition”
  • tags (str or List, optional) — A string tag or a list of tags to filter models on the Hub by, such as text-generation or spacy.
  • trained_dataset (str or List, optional) — A string tag or a list of string tags of the trained dataset for a model on the Hub.

A class that converts human-readable model search parameters into ones compatible with the REST API. For all parameters capitalization does not matter.

The ModelFilter class is deprecated and will be removed in huggingface_hub>=0.24. Please pass the filter parameters as keyword arguments directly to list_models().

Examples:

>>> from huggingface_hub import ModelFilter

>>> # For the author
>>> new_filter = ModelFilter(author="facebook")

>>> # For the library
>>> new_filter = ModelFilter(library="pytorch")

>>> # For the language
>>> new_filter = ModelFilter(language="french")

>>> # For the model_name
>>> new_filter = ModelFilter(model_name="bert")

>>> # For the task
>>> new_filter = ModelFilter(task="text-classification")

>>> # For the tags
>>> new_filter = ModelFilter(tags="benchmark:raft")

>>> # Related to the dataset
>>> new_filter = ModelFilter(trained_dataset="common_voice")