HfApi Client
Below is the documentation for the HfApi
class, which serves as a Python wrapper for the Hugging Face Hub’s API.
All methods from the HfApi
are also accessible from the package’s root directly. Both approaches are detailed below.
Using the root method is more straightforward, but the HfApi class gives you more flexibility.
In particular, you can pass a token that will be reused in all HTTP calls. This differs from
`huggingface-cli login` or `login()` in that the token is not persisted on the machine.
It is also possible to provide a different endpoint or configure a custom user-agent.
from huggingface_hub import HfApi, list_models
# Use root method
models = list_models()
# Or configure a HfApi client
hf_api = HfApi(
    endpoint="https://huggingface.co",  # Can be a Private Hub endpoint.
    token="hf_xxx",  # Token is not persisted on the machine.
)
models = hf_api.list_models()
HfApi
class huggingface_hub.HfApi
( endpoint: Optional[str] = None, token: Optional[str] = None, library_name: Optional[str] = None, library_version: Optional[str] = None, user_agent: Union[Dict, str, None] = None )
accept_access_request
( repo_id: str, user: str, repo_type: Optional[str] = None, token: Optional[str] = None )

Parameters

- repo_id (`str`) — The id of the repo for which to accept the access request.
- user (`str`) — The username of the user whose access request should be accepted.
- repo_type (`str`, optional) — The type of the repo for which to accept the access request. Must be one of `model`, `dataset` or `space`. Defaults to `model`.
- token (`str`, optional) — A valid authentication token (see https://huggingface.co/settings/token).
Raises

`HTTPError`

- `HTTPError` — HTTP 400 if the repo is not gated.
- `HTTPError` — HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write` or `admin` role in the organization the repo belongs to, or if you passed a `read` token.
- `HTTPError` — HTTP 404 if the user does not exist on the Hub.
- `HTTPError` — HTTP 404 if the user access request cannot be found.
- `HTTPError` — HTTP 404 if the user access request is already in the accepted list.
Accept an access request from a user for a given gated repo.
Once the request is accepted, the user will be able to download any file of the repo and access the community tab. If the approval mode is automatic, you don’t have to accept requests manually. An accepted request can be cancelled or rejected at any time using cancel_access_request() and reject_access_request().
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
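As a minimal sketch (the repo id, username and token below are placeholders, not real values), accepting a pending request could look like:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")  # token with write/admin rights on the repo
>>> api.accept_access_request(
...     repo_id="username/my-gated-model",  # hypothetical gated repo
...     user="some-user",
... )
```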
add_collection_item
( collection_slug: str, item_id: str, item_type: CollectionItemType_T, note: Optional[str] = None, exists_ok: bool = False, token: Optional[str] = None )

Parameters

- collection_slug (`str`) — Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- item_id (`str`) — ID of the item to add to the collection. It can be the ID of a repo on the Hub (e.g. `"facebook/bart-large-mnli"`) or a paper id (e.g. `"2307.09288"`).
- item_type (`str`) — Type of the item to add. Can be one of `"model"`, `"dataset"`, `"space"` or `"paper"`.
- note (`str`, optional) — A note to attach to the item in the collection. The maximum size for a note is 500 characters.
- exists_ok (`bool`, optional) — If `True`, do not raise an error if the item already exists.
- token (`str`, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Add an item to a collection on the Hub.
Returns: Collection
Example:
>>> from huggingface_hub import add_collection_item
>>> collection = add_collection_item(
... collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
... item_id="pierre-loic/climate-news-articles",
... item_type="dataset"
... )
>>> collection.items[-1].item_id
"pierre-loic/climate-news-articles"
# ^item got added to the collection on last position
# Add item with a note
>>> add_collection_item(
...     collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
...     item_id="datasets/climate_fever",
...     item_type="dataset",
...     note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
(...)
add_space_secret
( repo_id: str, key: str, value: str, description: Optional[str] = None, token: Optional[str] = None )

Parameters

- repo_id (`str`) — ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- key (`str`) — Secret key. Example: `"GITHUB_API_KEY"`.
- value (`str`) — Secret value. Example: `"your_github_api_key"`.
- description (`str`, optional) — Secret description. Example: `"Github API key to access the Github API"`.
- token (`str`, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Adds or updates a secret in a Space.
Secrets let you set secret keys or tokens in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
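As an illustrative sketch (the Space id and values below are placeholders), adding a secret to a Space could look like:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.add_space_secret(
...     repo_id="username/my-space",  # hypothetical Space
...     key="GITHUB_API_KEY",
...     value="your_github_api_key",
...     description="Github API key to access the Github API",
... )
```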
add_space_variable
( repo_id: str, key: str, value: str, description: Optional[str] = None, token: Optional[str] = None )

Parameters

- repo_id (`str`) — ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- key (`str`) — Variable key. Example: `"MODEL_REPO_ID"`.
- value (`str`) — Variable value. Example: `"the_model_repo_id"`.
- description (`str`, optional) — Description of the variable. Example: `"Model Repo ID of the implemented model"`.
- token (`str`, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Adds or updates a variable in a Space.
Variables let you set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
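Analogously to secrets, a hedged sketch of setting a public variable (placeholder Space id and values):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.add_space_variable(
...     repo_id="username/my-space",  # hypothetical Space
...     key="MODEL_REPO_ID",
...     value="the_model_repo_id",
...     description="Model Repo ID of the implemented model",
... )
```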
cancel_access_request
( repo_id: str, user: str, repo_type: Optional[str] = None, token: Optional[str] = None )

Parameters

- repo_id (`str`) — The id of the repo for which to cancel the access request.
- user (`str`) — The username of the user whose access request should be cancelled.
- repo_type (`str`, optional) — The type of the repo for which to cancel the access request. Must be one of `model`, `dataset` or `space`. Defaults to `model`.
- token (`str`, optional) — A valid authentication token (see https://huggingface.co/settings/token).
Raises

`HTTPError`

- `HTTPError` — HTTP 400 if the repo is not gated.
- `HTTPError` — HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write` or `admin` role in the organization the repo belongs to, or if you passed a `read` token.
- `HTTPError` — HTTP 404 if the user does not exist on the Hub.
- `HTTPError` — HTTP 404 if the user access request cannot be found.
- `HTTPError` — HTTP 404 if the user access request is already in the pending list.
Cancel an access request from a user for a given gated repo.
A cancelled request will go back to the pending list and the user will lose access to the repo.
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
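A minimal sketch (placeholder repo id and username), mirroring the accept flow:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")  # token with write/admin rights on the repo
>>> api.cancel_access_request(
...     repo_id="username/my-gated-model",  # hypothetical gated repo
...     user="some-user",
... )
```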
change_discussion_status
( repo_id: str, discussion_num: int, new_status: Literal['open', 'closed'], token: Optional[str] = None, comment: Optional[str] = None, repo_type: Optional[str] = None ) → DiscussionStatusChange

Parameters

- repo_id (`str`) — A namespace (user or an organization) and a repo name separated by a `/`.
- discussion_num (`int`) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
- new_status (`str`) — The new status for the discussion, either `"open"` or `"closed"`.
- comment (`str`, optional) — An optional comment to post with the status change.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to a model. Default is `None`.
- token (`str`, optional) — An authentication token (see https://huggingface.co/settings/token).
Returns
the status change event
Closes or re-opens a Discussion or Pull Request.
Examples:
>>> HfApi().change_discussion_status(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_status="closed"
... )
# DiscussionStatusChange(id='deadbeef0000000', type='status-change', ...)
Raises the following errors:

- `HTTPError` if the HuggingFace API returned an error
- `ValueError` if some parameter value is invalid
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn't exist, or because it is set to `private` and you do not have access.
comment_discussion
( repo_id: str, discussion_num: int, comment: str, token: Optional[str] = None, repo_type: Optional[str] = None ) → DiscussionComment

Parameters

- repo_id (`str`) — A namespace (user or an organization) and a repo name separated by a `/`.
- discussion_num (`int`) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
- comment (`str`) — The content of the comment to create. Comments support markdown formatting.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to a model. Default is `None`.
- token (`str`, optional) — An authentication token (see https://huggingface.co/settings/token).
Returns
the newly created comment
Creates a new comment on the given Discussion.
Examples:
>>> comment = """
... Hello @otheruser!
...
... # This is a title
...
... **This is bold**, *this is italic* and ~this is strikethrough~
... And [this](http://url) is a link
... """
>>> HfApi().comment_discussion(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     comment=comment
... )
# DiscussionComment(id='deadbeef0000000', type='comment', ...)
Raises the following errors:

- `HTTPError` if the HuggingFace API returned an error
- `ValueError` if some parameter value is invalid
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn't exist, or because it is set to `private` and you do not have access.
create_branch
( repo_id: str, branch: str, revision: Optional[str] = None, token: Optional[str] = None, repo_type: Optional[str] = None, exist_ok: bool = False )

Parameters

- repo_id (`str`) — The repository in which the branch will be created. Example: `"user/my-cool-model"`.
- branch (`str`) — The name of the branch to create.
- revision (`str`, optional) — The git revision to create the branch from. It can be a branch name or the OID/SHA of a commit, as a hexadecimal string. Defaults to the head of the `"main"` branch.
- token (`str`, optional) — Authentication token. Will default to the stored token.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if creating a branch on a dataset or space, `None` or `"model"` if creating a branch on a model. Default is `None`.
- exist_ok (`bool`, optional, defaults to `False`) — If `True`, do not raise an error if the branch already exists.
Raises

RepositoryNotFoundError or BadRequestError or HfHubHTTPError

- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated, or repo does not exist.
- BadRequestError — If invalid reference for a branch. Example: `refs/pr/5` or `refs/foo/bar`.
- HfHubHTTPError — If the branch already exists on the repo (error 409) and `exist_ok` is set to `False`.
Create a new branch for a repo on the Hub, starting from the specified revision (defaults to `main`).
To find a revision suiting your needs, you can use list_repo_refs() or list_repo_commits().
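For instance, an experiment branch could be created off a specific commit as follows (the repo id and commit SHA below are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_branch(
...     repo_id="user/my-cool-model",  # hypothetical repo
...     branch="experiment-1",
...     revision="6c0e6080953db56375760c0471a8c5f2929baf11",  # hypothetical commit SHA
...     exist_ok=True,  # don't fail if the branch already exists
... )
```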
create_collection
( title: str, namespace: Optional[str] = None, description: Optional[str] = None, private: bool = False, exists_ok: bool = False, token: Optional[str] = None )

Parameters

- title (`str`) — Title of the collection to create. Example: `"Recent models"`.
- namespace (`str`, optional) — Namespace of the collection to create (username or org). Will default to the owner name.
- description (`str`, optional) — Description of the collection to create.
- private (`bool`, optional) — Whether the collection should be private or not. Defaults to `False` (i.e. public collection).
- exists_ok (`bool`, optional) — If `True`, do not raise an error if the collection already exists.
- token (`str`, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Create a new Collection on the Hub.
Returns: Collection
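A hedged sketch (the namespace and the returned slug are placeholders):

```python
>>> from huggingface_hub import create_collection
>>> collection = create_collection(
...     title="Recent models",
...     description="Models I want to keep an eye on",
...     exists_ok=True,  # don't fail if the collection already exists
... )
>>> collection.slug  # e.g. "username/recent-models-64f9a55bb3115b4f513ec026"
```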
create_commit
( repo_id: str, operations: Iterable[CommitOperation], commit_message: str, commit_description: Optional[str] = None, token: Optional[str] = None, repo_type: Optional[str] = None, revision: Optional[str] = None, create_pr: Optional[bool] = None, num_threads: int = 5, parent_commit: Optional[str] = None, run_as_future: bool = False ) → CommitInfo or Future

Parameters

- repo_id (`str`) — The repository in which the commit will be created, for example: `"username/custom_transformers"`.
- operations (`Iterable` of `CommitOperation()`) — An iterable of operations to include in the commit, either:
  - CommitOperationAdd to upload a file
  - CommitOperationDelete to delete a file
  - CommitOperationCopy to copy a file

  Operation objects will be mutated to include information relative to the upload. Do not reuse the same objects for multiple commits.
- commit_message (`str`) — The summary (first line) of the commit that will be created.
- commit_description (`str`, optional) — The description of the commit that will be created.
- token (`str`, optional) — Authentication token, obtained with the `HfApi.login` method. Will default to the stored token.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to a model. Default is `None`.
- revision (`str`, optional) — The git revision to commit from. Defaults to the head of the `"main"` branch.
- create_pr (`boolean`, optional) — Whether or not to create a Pull Request with that commit. Defaults to `False`. If `revision` is not set, the PR is opened against the `"main"` branch. If `revision` is set and is a branch, the PR is opened against this branch. If `revision` is set and is not a branch name (example: a commit oid), a `RevisionNotFoundError` is returned by the server.
- num_threads (`int`, optional) — Number of concurrent threads for uploading files. Defaults to 5. Setting it to 2 means at most 2 files will be uploaded concurrently.
- parent_commit (`str`, optional) — The OID/SHA of the parent commit, as a hexadecimal string. Shorthands (first 7 characters) are also supported. If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`. If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`. Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated/committed to concurrently.
- run_as_future (`bool`, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passing `run_as_future=True` will return a Future object. Defaults to `False`.
Returns
CommitInfo or Future
Instance of CommitInfo containing information about the newly created commit (commit hash, commit
url, pr url, commit message,…). If run_as_future=True
is passed, returns a Future object which will
contain the result when executed.
Raises

`ValueError` or RepositoryNotFoundError

- `ValueError` — If the commit message is empty.
- `ValueError` — If the parent commit is not a valid commit OID.
- `ValueError` — If the Hub API returns an HTTP 400 error (bad request).
- `ValueError` — If `create_pr` is `True` and `revision` is neither `None` nor `"main"`.
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated, or repo does not exist.
Creates a commit in the given repo, deleting & uploading files as needed.
The input list of CommitOperation
will be mutated during the commit process. Do not reuse the same objects
for multiple commits.
create_commit
assumes that the repo already exists on the Hub. If you get a
Client error 404, please make sure you are authenticated and that repo_id
and
repo_type
are set correctly. If repo does not exist, create it first using
create_repo().
create_commit
is limited to 25k LFS files and a 1GB payload for regular files.
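Putting it together, a single commit that uploads one file and deletes another could be sketched as follows (the repo id and file paths are placeholders):

```python
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationDelete
>>> api = HfApi()
>>> operations = [
...     CommitOperationAdd(path_in_repo="weights.bin", path_or_fileobj="./weights.bin"),
...     CommitOperationDelete(path_in_repo="old_weights.bin"),
... ]
>>> api.create_commit(
...     repo_id="username/custom_transformers",
...     operations=operations,
...     commit_message="Replace old weights with new ones",
... )  # returns a CommitInfo
```

Since the operation objects are mutated during the commit, build a fresh list for each call.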
create_commits_on_pr
( repo_id: str, addition_commits: List[List[CommitOperationAdd]], deletion_commits: List[List[CommitOperationDelete]], commit_message: str, commit_description: Optional[str] = None, token: Optional[str] = None, repo_type: Optional[str] = None, merge_pr: bool = True, num_threads: int = 5, verbose: bool = False ) → str

Parameters

- repo_id (`str`) — The repository in which the commits will be pushed. Example: `"username/my-cool-model"`.
- addition_commits (`List` of `List` of CommitOperationAdd) — A list containing lists of CommitOperationAdd. Each sublist will result in a commit on the PR.
- deletion_commits (`List` of `List` of CommitOperationDelete) — A list containing lists of CommitOperationDelete. Each sublist will result in a commit on the PR. Deletion commits are pushed before addition commits.
- commit_message (`str`) — The summary (first line) of the commit that will be created. Will also be the title of the PR.
- commit_description (`str`, optional) — The description of the commit that will be created. The description will be added to the PR.
- token (`str`, optional) — Authentication token, obtained with the `HfApi.login` method. Will default to the stored token.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to a model. Default is `None`.
- merge_pr (`bool`) — If set to `True`, the Pull Request is merged at the end of the process. Defaults to `True`.
- num_threads (`int`, optional) — Number of concurrent threads for uploading files. Defaults to 5.
- verbose (`bool`) — If set to `True`, the process will run in verbose mode, i.e. print information about the ongoing tasks. Defaults to `False`.
Returns
str
URL to the created PR.
Raises

MultiCommitException

- MultiCommitException — If an unexpected issue occurs in the process: empty commits, unexpected commits in a PR, unexpected PR description, etc.
Push changes to the Hub in multiple commits.
Commits are pushed to a draft PR branch. If the upload fails or gets interrupted, it can be resumed. Progress
is tracked in the PR description. At the end of the process, the PR is set as open and the title is updated to
match the initial commit message. If merge_pr=True
is passed, the PR is merged automatically.
All deletion commits are pushed first, followed by the addition commits. The order of the commits is not guaranteed as we might implement parallel commits in the future. Make sure you are not updating the same file several times.
create_commits_on_pr
is experimental. Its API and behavior is subject to change in the future without prior notice.
create_commits_on_pr
assumes that the repo already exists on the Hub. If you get a Client error 404, please
make sure you are authenticated and that repo_id
and repo_type
are set correctly. If repo does not exist,
create it first using create_repo().
Example:
>>> from huggingface_hub import HfApi, plan_multi_commits
>>> addition_commits, deletion_commits = plan_multi_commits(
... operations=[
... CommitOperationAdd(...),
... CommitOperationAdd(...),
... CommitOperationDelete(...),
... CommitOperationDelete(...),
... CommitOperationAdd(...),
... ],
... )
>>> HfApi().create_commits_on_pr(
... repo_id="my-cool-model",
... addition_commits=addition_commits,
... deletion_commits=deletion_commits,
... (...)
... verbose=True,
... )
create_discussion
( repo_id: str, title: str, token: Optional[str] = None, description: Optional[str] = None, repo_type: Optional[str] = None, pull_request: bool = False )

Parameters

- repo_id (`str`) — A namespace (user or an organization) and a repo name separated by a `/`.
- title (`str`) — The title of the discussion. It can be up to 200 characters long, and must be at least 3 characters long. Leading and trailing whitespaces will be stripped.
- token (`str`, optional) — An authentication token (see https://huggingface.co/settings/token).
- description (`str`, optional) — An optional description for the Pull Request. Defaults to `"Discussion opened with the huggingface_hub Python library"`.
- pull_request (`bool`, optional) — Whether to create a Pull Request or discussion. If `True`, creates a Pull Request. If `False`, creates a discussion. Defaults to `False`.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to a model. Default is `None`.
Creates a Discussion or Pull Request.
Pull Requests created programmatically will be in `"draft"` status.
Creating a Pull Request with changes can also be done at once with HfApi.create_commit().
Returns: DiscussionWithDetails
Raises the following errors:

- `HTTPError` if the HuggingFace API returned an error
- `ValueError` if some parameter value is invalid
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn't exist, or because it is set to `private` and you do not have access.
create_inference_endpoint
( name: str, repository: str, framework: str, accelerator: str, instance_size: str, instance_type: str, region: str, vendor: str, account_id: Optional[str] = None, min_replica: int = 0, max_replica: int = 1, revision: Optional[str] = None, task: Optional[str] = None, custom_image: Optional[Dict] = None, type: InferenceEndpointType = <InferenceEndpointType.PROTECTED: 'protected'>, namespace: Optional[str] = None, token: Optional[str] = None ) → InferenceEndpoint

Parameters

- name (`str`) — The unique name for the new Inference Endpoint.
- repository (`str`) — The name of the model repository associated with the Inference Endpoint (e.g. `"gpt2"`).
- framework (`str`) — The machine learning framework used for the model (e.g. `"custom"`).
- accelerator (`str`) — The hardware accelerator to be used for inference (e.g. `"cpu"`).
- instance_size (`str`) — The size or type of the instance to be used for hosting the model (e.g. `"large"`).
- instance_type (`str`) — The cloud instance type where the Inference Endpoint will be deployed (e.g. `"c6i"`).
- region (`str`) — The cloud region in which the Inference Endpoint will be created (e.g. `"us-east-1"`).
- vendor (`str`) — The cloud provider or vendor where the Inference Endpoint will be hosted (e.g. `"aws"`).
- account_id (`str`, optional) — The account ID used to link a VPC to a private Inference Endpoint (if applicable).
- min_replica (`int`, optional) — The minimum number of replicas (instances) to keep running for the Inference Endpoint. Defaults to 0.
- max_replica (`int`, optional) — The maximum number of replicas (instances) to scale to for the Inference Endpoint. Defaults to 1.
- revision (`str`, optional) — The specific model revision to deploy on the Inference Endpoint (e.g. `"6c0e6080953db56375760c0471a8c5f2929baf11"`).
- task (`str`, optional) — The task on which to deploy the model (e.g. `"text-classification"`).
- custom_image (`Dict`, optional) — A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an Inference Endpoint running on the `text-generation-inference` (TGI) framework (see examples).
- type (`InferenceEndpointType`, optional) — The type of the Inference Endpoint, which can be `"protected"` (default), `"public"` or `"private"`.
- namespace (`str`, optional) — The namespace where the Inference Endpoint will be created. Defaults to the current user's namespace.
- token (`str`, optional) — An authentication token (see https://huggingface.co/settings/token).
Returns
information about the newly created Inference Endpoint.
Create a new Inference Endpoint.
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
... "my-endpoint-name",
... repository="gpt2",
... framework="pytorch",
... task="text-generation",
... accelerator="cpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="c6i",
... )
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', status="pending",...)
# Run inference on the endpoint
>>> endpoint.client.text_generation(...)
"..."
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
... "aws-zephyr-7b-beta-0486",
... repository="HuggingFaceH4/zephyr-7b-beta",
... framework="pytorch",
... task="text-generation",
... accelerator="gpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="g5.2xlarge",
... custom_image={
... "health_route": "/health",
... "env": {
... "MAX_BATCH_PREFILL_TOKENS": "2048",
... "MAX_INPUT_LENGTH": "1024",
... "MAX_TOTAL_TOKENS": "1512",
... "MODEL_ID": "/repository"
... },
... "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
... },
... )
create_pull_request
( repo_id: str, title: str, token: Optional[str] = None, description: Optional[str] = None, repo_type: Optional[str] = None )

Parameters

- repo_id (`str`) — A namespace (user or an organization) and a repo name separated by a `/`.
- title (`str`) — The title of the discussion. It can be up to 200 characters long, and must be at least 3 characters long. Leading and trailing whitespaces will be stripped.
- token (`str`, optional) — An authentication token (see https://huggingface.co/settings/token).
- description (`str`, optional) — An optional description for the Pull Request. Defaults to `"Discussion opened with the huggingface_hub Python library"`.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to a model. Default is `None`.
Creates a Pull Request. Pull Requests created programmatically will be in `"draft"` status.
Creating a Pull Request with changes can also be done at once with HfApi.create_commit();
This is a wrapper around HfApi.create_discussion().
Returns: DiscussionWithDetails
Raises the following errors:

- `HTTPError` if the HuggingFace API returned an error
- `ValueError` if some parameter value is invalid
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn't exist, or because it is set to `private` and you do not have access.
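A minimal sketch (the repo id and title below are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> pr = api.create_pull_request(
...     repo_id="username/repo_name",
...     title="Fix tokenizer config",
... )
>>> pr.num  # the number of the newly created (draft) Pull Request
```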
create_repo
( repo_id: str, token: Optional[str] = None, private: bool = False, repo_type: Optional[str] = None, exist_ok: bool = False, space_sdk: Optional[str] = None, space_hardware: Optional[SpaceHardware] = None, space_storage: Optional[SpaceStorage] = None, space_sleep_time: Optional[int] = None, space_secrets: Optional[List[Dict[str, str]]] = None, space_variables: Optional[List[Dict[str, str]]] = None ) → RepoUrl

Parameters

- repo_id (`str`) — A namespace (user or an organization) and a repo name separated by a `/`.
- token (`str`, optional) — An authentication token (see https://huggingface.co/settings/token).
- private (`bool`, optional, defaults to `False`) — Whether the model repo should be private.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to a model. Default is `None`.
- exist_ok (`bool`, optional, defaults to `False`) — If `True`, do not raise an error if the repo already exists.
- space_sdk (`str`, optional) — Choice of SDK to use if repo_type is `"space"`. Can be `"streamlit"`, `"gradio"`, `"docker"`, or `"static"`.
- space_hardware (`SpaceHardware` or `str`, optional) — Choice of Hardware if repo_type is `"space"`. See SpaceHardware for a complete list.
- space_storage (`SpaceStorage` or `str`, optional) — Choice of persistent storage tier. Example: `"small"`. See SpaceStorage for a complete list.
- space_sleep_time (`int`, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- space_secrets (`List[Dict[str, str]]`, optional) — A list of secret keys to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
- space_variables (`List[Dict[str, str]]`, optional) — A list of public environment variables to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
Returns

URL to the newly created repo. Value is a subclass of `str` containing attributes like `endpoint`, `repo_type` and `repo_id`.
Create an empty repo on the HuggingFace Hub.
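A hedged sketch (the repo id below is a placeholder):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> url = api.create_repo(
...     repo_id="username/my-new-model",  # hypothetical repo id
...     private=True,
...     exist_ok=True,  # don't fail if the repo already exists
... )
>>> url.repo_id  # RepoUrl is a str subclass with extra attributes
```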
create_tag
( repo_id: str, tag: str, tag_message: Optional[str] = None, revision: Optional[str] = None, token: Optional[str] = None, repo_type: Optional[str] = None, exist_ok: bool = False )

Parameters

- repo_id (`str`) — The repository in which a commit will be tagged. Example: `"user/my-cool-model"`.
- tag (`str`) — The name of the tag to create.
- tag_message (`str`, optional) — The description of the tag to create.
- revision (`str`, optional) — The git revision to tag. It can be a branch name or the OID/SHA of a commit, as a hexadecimal string. Shorthands (first 7 characters) are also supported. Defaults to the head of the `"main"` branch.
- token (`str`, optional) — Authentication token. Will default to the stored token.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if tagging a dataset or space, `None` or `"model"` if tagging a model. Default is `None`.
- exist_ok (`bool`, optional, defaults to `False`) — If `True`, do not raise an error if the tag already exists.
Raises

RepositoryNotFoundError or RevisionNotFoundError or HfHubHTTPError

- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated, or repo does not exist.
- RevisionNotFoundError — If revision is not found (error 404) on the repo.
- HfHubHTTPError — If the tag already exists on the repo (error 409) and `exist_ok` is set to `False`.
Tag a given commit of a repo on the Hub.
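For instance, tagging the current head of `main` as a release could look like this (the repo id and tag name are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_tag(
...     repo_id="user/my-cool-model",  # hypothetical repo
...     tag="v1.0",
...     tag_message="First stable release",
...     exist_ok=True,  # don't fail if the tag already exists
... )
```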
dataset_info
( repo_id: str, revision: Optional[str] = None, timeout: Optional[float] = None, files_metadata: bool = False, token: Optional[Union[bool, str]] = None ) → hf_api.DatasetInfo

Parameters

- repo_id (`str`) — A namespace (user or an organization) and a repo name separated by a `/`.
- revision (`str`, optional) — The revision of the dataset repository from which to get the information.
- timeout (`float`, optional) — Timeout in seconds for the request to the Hub.
- files_metadata (`bool`, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults to `False`.
- token (`bool` or `str`, optional) — A valid authentication token (see https://huggingface.co/settings/token). If `None` or `True` and the machine is logged in (through `huggingface-cli login` or `login()`), the token will be retrieved from the cache. If `False`, the token is not sent in the request header.
Returns
The dataset repository information.
Get info on one specific dataset on huggingface.co.
Dataset can be private if you pass an acceptable token.
Raises the following errors:

- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn't exist, or because it is set to `private` and you do not have access.
- RevisionNotFoundError if the revision to download from cannot be found.
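For a public dataset, a minimal lookup could be sketched as:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> info = api.dataset_info("squad")  # public dataset, no token required
>>> info.id
'squad'
>>> info.sha  # commit hash of the resolved revision
```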
delete_branch
( repo_id: str, branch: str, token: Optional[str] = None, repo_type: Optional[str] = None )

Parameters

- repo_id (`str`) — The repository in which a branch will be deleted. Example: `"user/my-cool-model"`.
- branch (`str`) — The name of the branch to delete.
- token (`str`, optional) — Authentication token. Will default to the stored token.
- repo_type (`str`, optional) — Set to `"dataset"` or `"space"` if deleting a branch from a dataset or space, `None` or `"model"` if deleting from a model. Default is `None`.
Raises

- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated, or repo does not exist.
- HfHubHTTPError — If trying to delete a protected branch. Example: `main` cannot be deleted.
- HfHubHTTPError — If trying to delete a branch that does not exist.
Delete a branch from a repo on the Hub.
delete_collection
< source >( collection_slug: str missing_ok: bool = False token: Optional[str] = None )
Parameters
- collection_slug (
str
) — Slug of the collection to delete. Example:"TheBloke/recent-models-64f9a55bb3115b4f513ec026"
. - missing_ok (
bool
, optional) — IfTrue
, do not raise an error if the collection doesn’t exist. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Delete a collection on the Hub.
Example:
>>> from huggingface_hub import delete_collection
>>> collection = delete_collection("username/useless-collection-64f9a55bb3115b4f513ec026", missing_ok=True)
This is a non-revertible action. A deleted collection cannot be restored.
delete_collection_item
< source >( collection_slug: str item_object_id: str missing_ok: bool = False token: Optional[str] = None )
Parameters
- collection_slug (
str
) — Slug of the collection to update. Example:"TheBloke/recent-models-64f9a55bb3115b4f513ec026"
. - item_object_id (
str
) — ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id). It must be retrieved from a CollectionItem object. Example:collection.items[0]._id
. - missing_ok (
bool
, optional) — IfTrue
, do not raise an error if the item doesn’t exist. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Delete an item from a collection.
Example:
>>> from huggingface_hub import get_collection, delete_collection_item
# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
# Delete item based on its ID
>>> delete_collection_item(
... collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
... item_object_id=collection.items[-1].item_object_id,
... )
delete_file
< source >( path_in_repo: str repo_id: str token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None )
Parameters
- path_in_repo (
str
) — Relative filepath in the repo, for example:"checkpoints/1fec34a/weights.bin"
- repo_id (
str
) — The repository from which the file will be deleted, for example:"username/custom_transformers"
- token (
str
, optional) — Authentication token, obtained withHfApi.login
method. Will default to the stored token. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if the file is in a dataset or space,None
or"model"
if in a model. Default isNone
. - revision (
str
, optional) — The git revision to commit from. Defaults to the head of the"main"
branch. - commit_message (
str
, optional) — The summary / title / first line of the generated commit. Defaults tof"Delete {path_in_repo} with huggingface_hub"
. - commit_description (
str
optional) — The description of the generated commit - create_pr (
boolean
, optional) — Whether or not to create a Pull Request with that commit. Defaults toFalse
. Ifrevision
is not set, PR is opened against the"main"
branch. Ifrevision
is set and is a branch, PR is opened against this branch. Ifrevision
is set and is not a branch name (example: a commit oid), a RevisionNotFoundError
is returned by the server. - parent_commit (
str
, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified andcreate_pr
isFalse
, the commit will fail ifrevision
does not point toparent_commit
. If specified andcreate_pr
isTrue
, the pull request will be created fromparent_commit
. Specifyingparent_commit
ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
Deletes a file in the given repo.
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access. - RevisionNotFoundError If the revision to download from cannot be found.
- EntryNotFoundError If the file to download cannot be found.
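The parent_commit safeguard described above behaves like a compare-and-swap on the branch head: the commit is only applied if the branch still points where you expect. A minimal offline sketch of the matching rule, including the 7-character shorthand; the helper name and logic are illustrative, not the library’s actual implementation:

```python
from typing import Optional

def parent_commit_matches(head_oid: str, parent_commit: Optional[str]) -> bool:
    """Illustrative check: a commit is allowed only if the branch head
    still matches the expected parent (full OID or >=7-char shorthand)."""
    if parent_commit is None:
        return True  # no safeguard requested
    if len(parent_commit) < 7:
        raise ValueError("parent_commit shorthand must be at least 7 characters")
    return head_oid.startswith(parent_commit)

# The delete is only attempted when the head has not moved:
head = "403450e234d65943a7dcf7e05a771ce3c92faa84"
assert parent_commit_matches(head, "403450e")        # shorthand matches
assert not parent_commit_matches(head, "deadbeef0")  # branch moved: commit fails
```

This is why specifying parent_commit protects against concurrent pushes to the same branch.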
delete_folder
< source >( path_in_repo: str repo_id: str token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None )
Parameters
- path_in_repo (
str
) — Relative folder path in the repo, for example:"checkpoints/1fec34a"
. - repo_id (
str
) — The repository from which the folder will be deleted, for example:"username/custom_transformers"
- token (
str
, optional) — Authentication token, obtained withHfApi.login
method. Will default to the stored token. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if the folder is in a dataset or space,None
or"model"
if in a model. Default isNone
. - revision (
str
, optional) — The git revision to commit from. Defaults to the head of the"main"
branch. - commit_message (
str
, optional) — The summary / title / first line of the generated commit. Defaults tof"Delete folder {path_in_repo} with huggingface_hub"
. - commit_description (
str
optional) — The description of the generated commit. - create_pr (
boolean
, optional) — Whether or not to create a Pull Request with that commit. Defaults toFalse
. Ifrevision
is not set, PR is opened against the"main"
branch. Ifrevision
is set and is a branch, PR is opened against this branch. Ifrevision
is set and is not a branch name (example: a commit oid), a RevisionNotFoundError
is returned by the server. - parent_commit (
str
, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified andcreate_pr
isFalse
, the commit will fail ifrevision
does not point toparent_commit
. If specified andcreate_pr
isTrue
, the pull request will be created fromparent_commit
. Specifyingparent_commit
ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
Deletes a folder in the given repo.
Simple wrapper around create_commit() method.
delete_inference_endpoint
< source >( name: str namespace: Optional[str] = None token: Optional[str] = None )
Parameters
- name (
str
) — The name of the Inference Endpoint to delete. - namespace (
str
, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Delete an Inference Endpoint.
This operation is not reversible. If you don’t want to be charged for an Inference Endpoint, it is preferable to pause it with pause_inference_endpoint() or scale it to zero with scale_to_zero_inference_endpoint().
For convenience, you can also delete an Inference Endpoint using InferenceEndpoint.delete().
delete_repo
< source >( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None missing_ok: bool = False )
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token) - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. - missing_ok (
bool
, optional, defaults toFalse
) — IfTrue
, do not raise an error if repo does not exist.
Raises
- RepositoryNotFoundError — If the repository to delete from cannot be found and missing_ok is set to False (default).
Delete a repo from the HuggingFace Hub. CAUTION: this is irreversible.
delete_space_secret
< source >( repo_id: str key: str token: Optional[str] = None )
Deletes a secret from a Space.
Secrets let you set secret keys or tokens on a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
delete_space_storage
< source >( repo_id: str token: Optional[str] = None ) → SpaceRuntime
Parameters
- repo_id (
str
) — ID of the Space to update. Example:"HuggingFaceH4/open_llm_leaderboard"
. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Returns
Runtime information about a Space including Space stage and hardware.
Raises
BadRequestError
BadRequestError
— If space has no persistent storage.
Delete persistent storage for a Space.
delete_space_variable
< source >( repo_id: str key: str token: Optional[str] = None )
Deletes a variable from a Space.
Variables let you set environment variables on a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables
delete_tag
< source >( repo_id: str tag: str token: Optional[str] = None repo_type: Optional[str] = None )
Parameters
- repo_id (
str
) — The repository in which a tag will be deleted. Example:"user/my-cool-model"
. - tag (
str
) — The name of the tag to delete. - token (
str
, optional) — Authentication token. Will default to the stored token. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if tagging a dataset or space,None
or"model"
if tagging a model. Default isNone
.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If tag is not found.
Delete a tag from a repo on the Hub.
duplicate_space
< source >( from_id: str to_id: Optional[str] = None private: Optional[bool] = None token: Optional[str] = None exist_ok: bool = False hardware: Optional[SpaceHardware] = None storage: Optional[SpaceStorage] = None sleep_time: Optional[int] = None secrets: Optional[List[Dict[str, str]]] = None variables: Optional[List[Dict[str, str]]] = None ) → RepoUrl
Parameters
- from_id (
str
) — ID of the Space to duplicate. Example:"pharma/CLIP-Interrogator"
. - to_id (
str
, optional) — ID of the new Space. Example:"dog/CLIP-Interrogator"
. If not provided, the new Space will have the same name as the original Space, but in your account. - private (
bool
, optional) — Whether the new Space should be private or not. Defaults to the same privacy as the original Space. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided. - exist_ok (
bool
, optional, defaults toFalse
) — IfTrue
, do not raise an error if repo already exists. - hardware (
SpaceHardware
orstr
, optional) — Choice of Hardware. Example:"t4-medium"
. See SpaceHardware for a complete list. - storage (
SpaceStorage
orstr
, optional) — Choice of persistent storage tier. Example:"small"
. See SpaceStorage for a complete list. - sleep_time (
int
, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to-1
if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details. - secrets (
List[Dict[str, str]]
, optional) — A list of secret keys to set in your Space. Each item is in the form{"key": ..., "value": ..., "description": ...}
where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets. - variables (
List[Dict[str, str]]
, optional) — A list of public environment variables to set in your Space. Each item is in the form{"key": ..., "value": ..., "description": ...}
where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
Returns
URL to the newly created repo. Value is a subclass of str
containing
attributes like endpoint
, repo_type
and repo_id
.
Raises
- HTTPError — If the HuggingFace API returned an error.
- RepositoryNotFoundError — If one of from_id or to_id cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
Duplicate a Space.
Programmatically duplicate a Space. The new Space will be created in your account and will be in the same state as the original Space (running or paused). You can duplicate a Space no matter the current state of a Space.
Example:
>>> from huggingface_hub import duplicate_space
# Duplicate a Space to your account
>>> duplicate_space("multimodalart/dreambooth-training")
RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...)
# Can set custom destination id and visibility flag.
>>> duplicate_space("multimodalart/dreambooth-training", to_id="my-dreambooth", private=True)
RepoUrl('https://huggingface.co/spaces/nateraw/my-dreambooth',...)
edit_discussion_comment
< source >( repo_id: str discussion_num: int comment_id: str new_content: str token: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionComment
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - discussion_num (
int
) — The number of the Discussion or Pull Request. Must be a strictly positive integer. - comment_id (
str
) — The ID of the comment to edit. - new_content (
str
) — The new content of the comment. Comments support markdown formatting. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Returns
the edited comment
Edits a comment on a Discussion / Pull Request.
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
file_exists
< source >( repo_id: str filename: str repo_type: Optional[str] = None revision: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - filename (
str
) — The name of the file to check, for example:"config.json"
- repo_type (
str
, optional) — Set to"dataset"
or"space"
if getting repository info from a dataset or a space,None
or"model"
if getting repository info from a model. Default isNone
. - revision (
str
, optional) — The revision of the repository from which to get the information. Defaults to"main"
branch. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Checks if a file exists in a repository on the Hugging Face Hub.
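Under the hood this check boils down to probing the file’s resolve URL on the Hub. A sketch of how such a URL is laid out; the helper itself is illustrative, but the /resolve/ pattern matches real Hub URLs such as https://huggingface.co/gpt2/resolve/main/config.json:

```python
def build_resolve_url(repo_id, filename, repo_type=None, revision=None):
    """Illustrative: build the Hub URL that a file-existence check would probe.
    Datasets and Spaces are namespaced under a type prefix; models are not."""
    prefix = "" if repo_type in (None, "model") else f"{repo_type}s/"
    revision = revision or "main"
    return f"https://huggingface.co/{prefix}{repo_id}/resolve/{revision}/{filename}"

assert build_resolve_url("gpt2", "config.json") == \
    "https://huggingface.co/gpt2/resolve/main/config.json"
assert build_resolve_url("allenai/c4", "README.md", repo_type="dataset") == \
    "https://huggingface.co/datasets/allenai/c4/resolve/main/README.md"
```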
get_collection
< source >( collection_slug: str token: Optional[str] = None )
Gets information about a Collection on the Hub.
Returns: Collection
Example:
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem(
item_object_id='651446103cd773a050bf64c2',
item_id='TheBloke/U-Amethyst-20B-AWQ',
item_type='model',
position=88,
note=None
)
get_dataset_tags
< source >( )
List all valid dataset tags as a nested namespace object.
get_discussion_details
< source >( repo_id: str discussion_num: int repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - discussion_num (
int
) — The number of the Discussion or Pull Request . Must be a strictly positive integer. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Fetches a Discussion’s / Pull Request’s details from the Hub.
Returns: DiscussionWithDetails
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
get_full_repo_name
< source >( model_id: str organization: Optional[str] = None token: Optional[Union[bool, str]] = None ) → str
Parameters
- model_id (
str
) — The name of the model. - organization (
str
, optional) — If passed, the repository name will be in the organization namespace instead of the user namespace. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
str
The repository name in the user’s namespace ({username}/{model_id}) if no organization is passed, and under the organization namespace ({organization}/{model_id}) otherwise.
Returns the repository name for a given model ID and optional organization.
get_hf_file_metadata
< source >( url: str token: Union[bool, str, None] = None proxies: Optional[Dict] = None timeout: Optional[float] = 10 )
Parameters
- url (
str
) — File url, for example returned by hf_hub_url(). - token (
str
orbool
, optional) — A token to be used for the download.- If
True
, the token is read from the HuggingFace config folder. - If
False
orNone
, no token is provided. - If a string, it’s used as the authentication token.
- If
- proxies (
dict
, optional) — Dictionary mapping protocol to the URL of the proxy passed torequests.request
. - timeout (
float
, optional, defaults to 10) — How many seconds to wait for the server to send metadata before giving up.
Fetch metadata of a file versioned on the Hub for a given url.
get_inference_endpoint
< source >( name: str namespace: Optional[str] = None token: Optional[str] = None ) → InferenceEndpoint
Parameters
- name (
str
) — The name of the Inference Endpoint to retrieve information about. - namespace (
str
, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Returns
information about the requested Inference Endpoint.
Get information about an Inference Endpoint.
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)
# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'
# Run inference
>>> endpoint.client.text_to_image(...)
get_model_tags
< source >( )
List all valid model tags as a nested namespace object.
get_paths_info
< source >( repo_id: str paths: Union[List[str], str] expand: bool = False revision: Optional[str] = None repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None ) → List[Union[RepoFile, RepoFolder]]
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - paths (
Union[List[str], str]
, optional) — The paths to get information about. If a path does not exist, it is ignored without raising an exception. - expand (
bool
, optional, defaults toFalse
) — Whether to fetch more information about the paths (e.g. last commit and files’ security scan results). This operation is more expensive for the server so only 50 results are returned per page (instead of 1000). As pagination is implemented inhuggingface_hub
, this is transparent for you except for the time it takes to get the results. - revision (
str
, optional) — The revision of the repository from which to get the information. Defaults to"main"
branch. - repo_type (
str
, optional) — The type of the repository from which to get the information ("model"
,"dataset"
or"space"
). Defaults to"model"
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
List[Union[RepoFile, RepoFolder]]
The information about the paths, as a list of RepoFile
and RepoFolder
objects.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If revision is not found (error 404) on the repo.
Get information about a repo’s paths.
Example:
>>> from huggingface_hub import get_paths_info
>>> paths_info = get_paths_info("allenai/c4", ["README.md", "en"], repo_type="dataset")
>>> paths_info
[
RepoFile(path='README.md', size=2379, blob_id='f84cb4c97182890fc1dbdeaf1a6a468fd27b4fff', lfs=None, last_commit=None, security=None),
RepoFolder(path='en', tree_id='dc943c4c40f53d02b31ced1defa7e5f438d5862e', last_commit=None)
]
get_repo_discussions
< source >( repo_id: str author: Optional[str] = None discussion_type: Optional[DiscussionTypeFilter] = None discussion_status: Optional[DiscussionStatusFilter] = None repo_type: Optional[str] = None token: Optional[str] = None ) → Iterator[Discussion]
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - author (
str
, optional) — Pass a value to filter by discussion author.None
means no filter. Default isNone
. - discussion_type (
str
, optional) — Set to"pull_request"
to fetch only pull requests,"discussion"
to fetch only discussions. Set to"all"
orNone
to fetch both. Default isNone
. - discussion_status (
str
, optional) — Set to"open"
(respectively"closed"
) to fetch only open (respectively closed) discussions. Set to"all"
orNone
to fetch both. Default isNone
. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if fetching from a dataset or space,None
or"model"
if fetching from a model. Default isNone
. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Returns
Iterator[Discussion]
An iterator of Discussion objects.
Fetches Discussions and Pull Requests for the given repo.
Example:
get_safetensors_metadata
< source >( repo_id: str repo_type: Optional[str] = None revision: Optional[str] = None token: Optional[str] = None ) → SafetensorsRepoMetadata
Parameters
- repo_id (
str
) — A user or an organization name and a repo name separated by a/
. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if the file is in a dataset or space,None
or"model"
if in a model. Default isNone
. - revision (
str
, optional) — The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the head of the"main"
branch. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
SafetensorsRepoMetadata
information related to safetensors repo.
Raises
- NotASafetensorsRepoError — If the repo is not a safetensors repo, i.e. doesn’t have either a model.safetensors or a model.safetensors.index.json file.
- SafetensorsParsingError — If a safetensors file header couldn’t be parsed correctly.
Parse metadata for a safetensors repo on the Hub.
We first check if the repo has a single safetensors file or a sharded safetensors repo. If it’s a single safetensors file, we parse the metadata from this file. If it’s a sharded safetensors repo, we parse the metadata from the index file and then parse the metadata from each shard.
To parse metadata from a single safetensors file, use parse_safetensors_file_metadata().
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
Example:
# Parse repo with single weights file
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
metadata=None,
sharded=False,
weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}
# Parse repo with sharded model
>>> metadata = get_safetensors_metadata("bigscience/bloom")
Parse safetensors files: 100%|██████████████████████████████████████████| 72/72 [00:12<00:00, 5.78it/s]
>>> metadata
SafetensorsRepoMetadata(metadata={'total_size': 352494542848}, sharded=True, weight_map={...}, files_metadata={...})
>>> len(metadata.files_metadata)
72 # All safetensors files have been fetched
# Parse a repo that is not a safetensors repo
>>> get_safetensors_metadata("runwayml/stable-diffusion-v1-5")
NotASafetensorsRepoError: 'runwayml/stable-diffusion-v1-5' is not a safetensors repo. Couldn't find 'model.safetensors.index.json' or 'model.safetensors' files.
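For the sharded case described above, the index file is what drives the extra fetches: it maps each weight name to the shard that stores it, so the set of shards to parse is just the unique values of weight_map. A sketch with made-up index data (the shard filenames below are hypothetical):

```python
# Hypothetical content of a model.safetensors.index.json file
index = {
    "metadata": {"total_size": 352494542848},
    "weight_map": {
        "h.0.input_layernorm.bias": "model_00001-of-00072.safetensors",
        "h.0.input_layernorm.weight": "model_00001-of-00072.safetensors",
        "h.1.input_layernorm.bias": "model_00002-of-00072.safetensors",
    },
}

# Each unique shard listed in the index is fetched once and its header parsed
shards = sorted(set(index["weight_map"].values()))
assert shards == ["model_00001-of-00072.safetensors",
                  "model_00002-of-00072.safetensors"]
```

This is why a 72-shard repo like bigscience/bloom triggers 72 header fetches in the example above.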
get_space_runtime
< source >( repo_id: str token: Optional[str] = None ) → SpaceRuntime
Gets runtime information about a Space.
get_space_variables
< source >( repo_id: str token: Optional[str] = None )
Gets all variables from a Space.
Variables let you set environment variables on a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables
get_token_permission
< source >( token: Optional[str] = None ) → Literal["read", "write", None]
Check if a given token
is valid and return its permissions.
For more details about tokens, please refer to https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens.
grant_access
< source >( repo_id: str user: str repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (
str
) — The id of the repo to grant access to. - user (
str
) — The username of the user to grant access. - repo_type (
str
, optional) — The type of the repo to grant access to. Must be one ofmodel
,dataset
orspace
. Defaults tomodel
. - token (
str
, optional) — A valid authentication token (see https://huggingface.co/settings/token).
Raises
HTTPError
HTTPError
— HTTP 400 if the repo is not gated.HTTPError
— HTTP 400 if the user already has access to the repo.HTTPError
— HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t havewrite
oradmin
role in the organization the repo belongs to or if you passed aread
token.HTTPError
— HTTP 404 if the user does not exist on the Hub.
Grant access to a user for a given gated repo.
Granting access does not require the user to send an access request themselves. The user is automatically added to the accepted list, meaning they can download the repo’s files. You can revoke the granted access at any time using cancel_access_request() or reject_access_request().
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
hf_hub_download
< source >( repo_id: str filename: str subfolder: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None cache_dir: Union[str, Path, None] = None local_dir: Union[str, Path, None] = None local_dir_use_symlinks: Union[bool, Literal['auto']] = 'auto' force_download: bool = False force_filename: Optional[str] = None proxies: Optional[Dict] = None etag_timeout: float = 10 resume_download: bool = False token: Optional[Union[str, bool]] = None local_files_only: bool = False legacy_cache_layout: bool = False )
Parameters
- repo_id (
str
) — A user or an organization name and a repo name separated by a/
. - filename (
str
) — The name of the file in the repo. - subfolder (
str
, optional) — An optional value corresponding to a folder inside the model repo. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if downloading from a dataset or space,None
or"model"
if downloading from a model. Default isNone
. - revision (
str
, optional) — An optional Git revision id which can be a branch name, a tag, or a commit hash. - cache_dir (
str
,Path
, optional) — Path to the folder where cached files are stored. - local_dir (
str
orPath
, optional) — If provided, the downloaded file will be placed under this directory, either as a symlink (default) or a regular file (see description for more details). - local_dir_use_symlinks (
"auto"
orbool
, defaults to"auto"
) — To be used withlocal_dir
. If set to “auto”, the cache directory will be used and the file will be either duplicated or symlinked to the local directory depending on its size. If set to
, a symlink will be created, no matter the file size. If set toFalse
, the file will either be duplicated from cache (if already exists) or downloaded from the Hub and not cached. See description for more details. - force_download (
bool
, optional, defaults toFalse
) — Whether the file should be downloaded even if it already exists in the local cache. - proxies (
dict
, optional) — Dictionary mapping protocol to the URL of the proxy passed torequests.request
. - etag_timeout (
float
, optional, defaults to10
) — When fetching ETag, how many seconds to wait for the server to send data before giving up which is passed torequests.request
. - resume_download (
bool
, optional, defaults toFalse
) — IfTrue
, resume a previously interrupted download. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header. - local_files_only (
bool
, optional, defaults toFalse
) — IfTrue
, avoid downloading the file and return the path to the local cached file if it exists. - legacy_cache_layout (
bool
, optional, defaults toFalse
) — IfTrue
, uses the legacy file cache layout i.e. just call hf_hub_url() thencached_download
. This is deprecated as the new cache layout is more powerful.
Download a given file if it’s not already present in the local cache.
The new cache file layout looks like this:
- The cache directory contains one subfolder per repo_id (namespaced by repo type)
- inside each repo folder:
- refs is a list of the latest known revision => commit_hash pairs
- blobs contains the actual file blobs (identified by their git-sha or sha256, depending on whether they’re LFS files or not)
- snapshots contains one subfolder per commit, each “commit” contains the subset of the files that have been resolved at that particular commit. Each filename is a symlink to the blob at that particular commit.
If local_dir
is provided, the file structure from the repo will be replicated in this location. You can configure
how you want to move those files:
- If
local_dir_use_symlinks="auto"
(default), files are downloaded and stored in the cache directory as blob files. Small files (<5MB) are duplicated inlocal_dir
while a symlink is created for bigger files. The goal is to be able to manually edit and save small files without corrupting the cache while saving disk space for binary files. The 5MB threshold can be configured with theHF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD
environment variable. - If
local_dir_use_symlinks=True
, files are downloaded, stored in the cache directory and symlinked inlocal_dir
. This is optimal in terms of disk usage but files must not be manually edited. - If
local_dir_use_symlinks=False
and the blob files exist in the cache directory, they are duplicated in the local dir. This means disk usage is not optimized. - Finally, if
local_dir_use_symlinks=False
and the blob files do not exist in the cache directory, then the files are downloaded and directly placed underlocal_dir
. This means if you need to download them again later, they will be re-downloaded entirely.
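The four cases above can be summarized as a small decision helper. This is a simplified sketch of the documented rules, not the library's internal code; the 5MB default mirrors the HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD environment variable, and the function name is hypothetical:

```python
def resolve_local_dir_strategy(use_symlinks, file_size, blob_in_cache,
                               threshold=5 * 1024 * 1024):
    """Sketch of how a downloaded file lands in local_dir for each mode."""
    if use_symlinks == "auto":
        # Small files are duplicated so they stay editable; large files
        # are symlinked to the cached blob to save disk space.
        return "duplicate" if file_size < threshold else "symlink"
    if use_symlinks is True:
        # Optimal disk usage, but the symlinked files must not be edited.
        return "symlink"
    # use_symlinks is False: duplicate from the cache when the blob exists,
    # otherwise download straight into local_dir (re-downloaded next time).
    return "duplicate" if blob_in_cache else "direct_download"
```

For instance, `resolve_local_dir_strategy("auto", 10 * 1024 * 1024, True)` returns `"symlink"` because the file is above the 5MB threshold.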
[ 96] .
└── [ 160] models--julien-c--EsperBERTo-small
├── [ 160] blobs
│ ├── [321M] 403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
│ ├── [ 398] 7cb18dc9bafbfcf74629a4b760af1b160957a83e
│ └── [1.4K] d7edf6bd2a681fb0175f7735299831ee1b22b812
├── [ 96] refs
│ └── [ 40] main
└── [ 128] snapshots
├── [ 128] 2439f60ef33a0d46d85da5001d52aeda5b00ce9f
│ ├── [ 52] README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
│ └── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
└── [ 128] bbc77c8132af1cc5cf678da3f1ddf2de43606d48
├── [ 52] README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
└── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
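The listing above can be reproduced with the standard library alone. This is a toy illustration of the refs/blobs/snapshots layout using hypothetical shortened hashes; real caches are managed by huggingface_hub and use full git-sha/sha256 blob names:

```python
import tempfile
from pathlib import Path

cache = Path(tempfile.mkdtemp())
repo = cache / "models--julien-c--EsperBERTo-small"    # one folder per repo_id
(repo / "blobs").mkdir(parents=True)                   # content-addressed files
(repo / "refs").mkdir()                                # revision -> commit hash
(repo / "snapshots" / "2439f60e").mkdir(parents=True)  # one folder per commit

blob = repo / "blobs" / "d7edf6bd"
blob.write_text("# EsperBERTo-small")
(repo / "refs" / "main").write_text("2439f60e")        # 'main' points at the commit
readme = repo / "snapshots" / "2439f60e" / "README.md"
readme.symlink_to("../../blobs/d7edf6bd")              # snapshot entries are symlinks

print(readme.read_text())  # resolves through the symlink to the blob
```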
Raises the following errors:
EnvironmentError
iftoken=True
and the token cannot be found.OSError
if ETag cannot be determined.ValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access. - RevisionNotFoundError If the revision to download from cannot be found.
- EntryNotFoundError If the file to download cannot be found.
- LocalEntryNotFoundError If network is disabled or unavailable and file is not found in cache.
hide_discussion_comment
< source >( repo_id: str discussion_num: int comment_id: str token: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionComment
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - discussion_num (
int
) — The number of the Discussion or Pull Request. Must be a strictly positive integer. - comment_id (
str
) — The ID of the comment to hide. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Returns
the hidden comment
Hides a comment on a Discussion / Pull Request.
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
like
< source >( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None )
Parameters
- repo_id (
str
) — The repository to like. Example:"user/my-cool-model"
. - token (
str
, optional) — Authentication token. Will default to the stored token. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if liking a dataset or space,None
or"model"
if liking a model. Default isNone
.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
Like a given repo on the Hub (e.g. set as favorite).
See also unlike() and list_liked_repos().
list_accepted_access_requests
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None ) → List[AccessRequest]
Parameters
- repo_id (
str
) — The id of the repo to get access requests for. - repo_type (
str
, optional) — The type of the repo to get access requests for. Must be one ofmodel
,dataset
orspace
. Defaults tomodel
. - token (
str
, optional) — A valid authentication token (see https://huggingface.co/settings/token).
Returns
List[AccessRequest]
A list of AccessRequest
objects. Each item contains a username
, email
,
status
and timestamp
attribute. If the gated repo has a custom form, the fields
attribute will
be populated with user’s answers.
Raises
HTTPError
HTTPError
— HTTP 400 if the repo is not gated.HTTPError
— HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t havewrite
oradmin
role in the organization the repo belongs to or if you passed aread
token.
Get accepted access requests for a given gated repo.
An accepted request means the user has requested access to the repo and the request has been accepted. The user can download any file of the repo. If the approval mode is automatic, this list should contain all requests by default. Accepted requests can be cancelled or rejected at any time using cancel_access_request() and reject_access_request(). A cancelled request will go back to the pending list while a rejected request will go to the rejected list. In both cases, the user will lose access to the repo.
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
Example:
>>> from huggingface_hub import list_accepted_access_requests
>>> requests = list_accepted_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='accepted',
fields=None,
),
...
]
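The request lifecycle described above (accept, reject, cancel, and who can download) can be modeled as a tiny state machine. This is a sketch of the documented behavior, not library code:

```python
# Documented transitions for gated-repo access requests:
#   accept_access_request -> 'accepted', reject_access_request -> 'rejected',
#   cancel_access_request -> back to 'pending'.
_TRANSITIONS = {"accept": "accepted", "reject": "rejected", "cancel": "pending"}

def next_status(action):
    """Return the status a request ends up in after the given action."""
    if action not in _TRANSITIONS:
        raise ValueError(f"unknown action: {action!r}")
    return _TRANSITIONS[action]

def can_download(status):
    """Only users with an accepted request can download files from the repo."""
    return status == "accepted"

assert not can_download(next_status("cancel"))  # cancelled -> pending -> no access
```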
list_collections
< source >( owner: Union[List[str], str, None] = None item: Union[List[str], str, None] = None sort: Optional[Literal[('lastModified', 'trending', 'upvotes')]] = None limit: Optional[int] = None token: Optional[Union[bool, str]] = None ) → Iterable[Collection]
Parameters
- owner (
List[str]
orstr
, optional) — Filter by owner’s username. - item (
List[str]
orstr
, optional) — Filter collections containing a particular item. Example:"models/teknium/OpenHermes-2.5-Mistral-7B"
,"datasets/squad"
or"papers/2311.12983"
. - sort (
Literal["lastModified", "trending", "upvotes"]
, optional) — Sort collections by last modified, trending or upvotes. - limit (
int
, optional) — Maximum number of collections to be returned. - token (
bool
orstr
, optional) — An authentication token (see https://huggingface.co/settings/token).
Returns
Iterable[Collection]
an iterable of Collection objects.
List collections on the Huggingface Hub, given some filters.
When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items from a collection, you must use get_collection().
list_datasets
< source >( filter: Union[DatasetFilter, str, Iterable[str], None] = None author: Optional[str] = None search: Optional[str] = None sort: Union[Literal['last_modified'], str, None] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None full: Optional[bool] = None token: Optional[str] = None ) → Iterable[DatasetInfo]
Parameters
- filter (DatasetFilter or
str
orIterable
, optional) — A string or DatasetFilter which can be used to identify datasets on the hub. - author (
str
, optional) — A string which identifies the author of the returned datasets. - search (
str
, optional) — A string that will be contained in the returned datasets. - sort (
Literal["last_modified"]
orstr
, optional) — The key with which to sort the resulting datasets. Possible values are the properties of the huggingface_hub.hf_api.DatasetInfo class. - direction (
Literal[-1]
orint
, optional) — Direction in which to sort. The value-1
sorts by descending order while all other values sort by ascending order. - limit (
int
, optional) — The limit on the number of datasets fetched. Leaving this option toNone
fetches all datasets. - full (
bool
, optional) — Whether to fetch all dataset data, including thelast_modified
, thecard_data
and the files. Can contain useful information such as the PapersWithCode ID. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Iterable[DatasetInfo]
an iterable of huggingface_hub.hf_api.DatasetInfo objects.
List datasets hosted on the Huggingface Hub, given some filters.
Example usage with the filter
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all datasets
>>> api.list_datasets()
>>> # List only the text classification datasets
>>> api.list_datasets(filter="task_categories:text-classification")
>>> # Using the `DatasetFilter`
>>> filt = DatasetFilter(task_categories="text-classification")
>>> # List only the datasets in russian for language modeling
>>> api.list_datasets(
... filter=("language:ru", "task_ids:language-modeling")
... )
>>> # Using the `DatasetFilter`
>>> filt = DatasetFilter(language="ru", task_ids="language-modeling")
>>> api.list_datasets(filter=filt)
Example usage with the search
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all datasets with "text" in their name
>>> api.list_datasets(search="text")
>>> # List all datasets with "text" in their name made by google
>>> api.list_datasets(search="text", author="google")
list_files_info
< source >( repo_id: str paths: Union[List[str], str, None] = None expand: bool = False revision: Optional[str] = None repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None ) → Iterable[RepoFile]
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - paths (
Union[List[str], str, None]
, optional) — The paths to get information about. Paths to files are directly resolved. Paths to folders are resolved recursively which means that information is returned about all files in the folder and its subfolders. IfNone
, all files are returned (the default). If a path does not exist, it is ignored without raising an exception. - expand (
bool
, optional, defaults toFalse
) — Whether to fetch more information about the files (e.g. last commit and security scan results). This operation is more expensive for the server so only 50 results are returned per page (instead of 1000). As pagination is implemented inhuggingface_hub
, this is transparent for you except for the time it takes to get the results. - revision (
str
, optional) — The revision of the repository from which to get the information. Defaults to"main"
branch. - repo_type (
str
, optional) — The type of the repository from which to get the information ("model"
,"dataset"
or"space"
). Defaults to"model"
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Iterable[RepoFile]
The information about the files, as an iterable of RepoFile
objects. The order of the files is
not guaranteed.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If revision is not found (error 404) on the repo.
List files on a repo and get information about them.
Takes as input a list of paths. Those paths can be either files or folders. Two server endpoints are called:
- POST “/paths-info” to get information about the provided paths. Called once.
- GET “/tree?recursive=True” to paginate over the input folders. Called only if a folder path is provided as input. Will be called multiple times to follow pagination. If no path is provided as input, step 1 is skipped and all files from the repo are listed.
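The “/tree” step is paginated: the client keeps fetching “next” pages until none remain. A stubbed sketch of that loop, with hypothetical pages standing in for the HTTP responses:

```python
# Hypothetical paginated responses: url -> (files on this page, next url or None).
PAGES = {
    "/tree?recursive=True": (["README.md", "config.json"], "/tree?recursive=True&cursor=2"),
    "/tree?recursive=True&cursor=2": (["vae/diffusion_pytorch_model.bin"], None),
}

def list_tree(url):
    """Follow pagination links and accumulate every file entry."""
    files = []
    while url is not None:
        page_files, url = PAGES[url]  # in reality: one GET request per page
        files.extend(page_files)
    return files

print(list_tree("/tree?recursive=True"))  # files from all pages, in order
```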
Examples:
Get information about files on a repo.
>>> from huggingface_hub import list_files_info
>>> files_info = list_files_info("lysandre/arxiv-nlp", ["README.md", "config.json"])
>>> files_info
<generator object HfApi.list_files_info at 0x7f93b848e730>
>>> list(files_info)
[
RepoFile(path='README.md', size=391, blob_id='43bd404b159de6fba7c2f4d3264347668d43af25', lfs=None, last_commit=None, security=None),
RepoFile(path='config.json', size=554, blob_id='2f9618c3a19b9a61add74f70bfb121335aeef666', lfs=None, last_commit=None, security=None)
]
Get even more information about files on a repo (last commit and security scan results)
>>> from huggingface_hub import list_files_info
>>> files_info = list_files_info("prompthero/openjourney-v4", expand=True)
>>> list(files_info)
[
RepoFile(
path='safety_checker/pytorch_model.bin',
size=1216064769,
blob_id='c8835557a0d3af583cb06c7c154b7e54a069c41d',
lfs={
'size': 1216064769,
'sha256': '16d28f2b37109f222cdc33620fdd262102ac32112be0352a7f77e9614b35a394',
'pointer_size': 135
},
last_commit={
'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
'title': 'Upload diffusers weights (#48)',
'date': datetime.datetime(2023, 3, 21, 10, 5, 27, tzinfo=datetime.timezone.utc)
},
security={
'safe': True,
'av_scan': {
'virusFound': False,
'virusNames': None
},
'pickle_import_scan': {
'highestSafetyLevel': 'innocuous',
'imports': [
{'module': 'torch', 'name': 'FloatStorage', 'safety': 'innocuous'},
{'module': 'collections', 'name': 'OrderedDict', 'safety': 'innocuous'},
{'module': 'torch', 'name': 'LongStorage', 'safety': 'innocuous'},
{'module': 'torch._utils', 'name': '_rebuild_tensor_v2', 'safety': 'innocuous'}
]
}
}
),
RepoFile(
path='scheduler/scheduler_config.json',
size=465,
blob_id='70d55e3e9679f41cbc66222831b38d5abce683dd',
lfs=None,
last_commit={
'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
'title': 'Upload diffusers weights (#48)',
'date': datetime.datetime(2023, 3, 21, 10, 5, 27, tzinfo=datetime.timezone.utc)},
security={
'safe': True,
'av_scan': {
'virusFound': False,
'virusNames': None
},
'pickle_import_scan': None
}
),
...
]
List LFS files from the “vae/” folder in “stabilityai/stable-diffusion-2” repository.
>>> from huggingface_hub import list_files_info
>>> [info.path for info in list_files_info("stabilityai/stable-diffusion-2", "vae") if info.lfs is not None]
['vae/diffusion_pytorch_model.bin', 'vae/diffusion_pytorch_model.safetensors']
List all files on a repo.
list_inference_endpoints
< source >( namespace: Optional[str] = None token: Optional[str] = None ) → List[InferenceEndpoint]
Parameters
- namespace (
str
, optional) — The namespace to list endpoints for. Defaults to the current user. Set to"*"
to list all endpoints from all namespaces (i.e. personal namespace and all orgs the user belongs to). - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Returns
A list of all inference endpoints for the given namespace.
Lists all inference endpoints for the given namespace.
list_liked_repos
< source >( user: Optional[str] = None token: Optional[str] = None ) → UserLikes
Parameters
- user (
str
, optional) — Name of the user for which you want to fetch the likes. - token (
str
, optional) — A valid authentication token (see https://huggingface.co/settings/token). Used only ifuser
is not passed to implicitly determine the current user name.
Returns
object containing the user name and 3 lists of repo ids (1 for models, 1 for datasets and 1 for Spaces).
Raises
ValueError
ValueError
— Ifuser
is not passed and no token found (either from argument or from machine).
List all public repos liked by a user on huggingface.co.
This list is public so token is optional. If user
is not passed, it defaults to
the logged in user.
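The returned object bundles a user name with three repo-id lists, roughly like the dataclass below. The field names are assumed from the description above; the real class is huggingface_hub's UserLikes, and the sample values are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserLikesSketch:
    """Rough shape of the result: one user, three lists of repo ids."""
    user: str
    models: List[str] = field(default_factory=list)
    datasets: List[str] = field(default_factory=list)
    spaces: List[str] = field(default_factory=list)

    @property
    def total(self) -> int:
        return len(self.models) + len(self.datasets) + len(self.spaces)

likes = UserLikesSketch(user="clem", models=["gpt2"], datasets=["squad"])
print(likes.total)  # 2
```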
list_metrics
< source >( ) → List[MetricInfo]
Returns
List[MetricInfo]
a list of MetricInfo
objects.
Get the public list of all the metrics on huggingface.co
list_models
< source >( filter: Union[ModelFilter, str, Iterable[str], None] = None author: Optional[str] = None search: Optional[str] = None emissions_thresholds: Optional[Tuple[float, float]] = None sort: Union[Literal['last_modified'], str, None] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None full: Optional[bool] = None cardData: bool = False fetch_config: bool = False token: Optional[Union[bool, str]] = None ) → Iterable[ModelInfo]
Parameters
- filter (ModelFilter or
str
orIterable
, optional) — A string or ModelFilter which can be used to identify models on the Hub. - author (
str
, optional) — A string which identifies the author (user or organization) of the returned models. - search (
str
, optional) — A string that will be contained in the returned model ids. - emissions_thresholds (
Tuple
, optional) — A tuple of two ints or floats representing the minimum and maximum carbon footprint, in grams, used to filter the resulting models. - sort (
Literal["last_modified"]
orstr
, optional) — The key with which to sort the resulting models. Possible values are the properties of the huggingface_hub.hf_api.ModelInfo class. - direction (
Literal[-1]
orint
, optional) — Direction in which to sort. The value-1
sorts by descending order while all other values sort by ascending order. - limit (
int
, optional) — The limit on the number of models fetched. Leaving this option toNone
fetches all models. - full (
bool
, optional) — Whether to fetch all model data, including thelast_modified
, thesha
, the files and thetags
. This is set toTrue
by default when using a filter. - cardData (
bool
, optional) — Whether to grab the metadata for the model as well. Can contain useful information such as carbon emissions, metrics, and datasets trained on. - fetch_config (
bool
, optional) — Whether to fetch the model configs as well. This is not included infull
due to its size. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Iterable[ModelInfo]
an iterable of huggingface_hub.hf_api.ModelInfo objects.
List models hosted on the Huggingface Hub, given some filters.
Example usage with the filter
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all models
>>> api.list_models()
>>> # List only the text classification models
>>> api.list_models(filter="text-classification")
>>> # Using the `ModelFilter`
>>> filt = ModelFilter(task="text-classification")
>>> # List only models from the AllenNLP library
>>> api.list_models(filter="allennlp")
list_pending_access_requests
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None ) → List[AccessRequest]
Parameters
- repo_id (
str
) — The id of the repo to get access requests for. - repo_type (
str
, optional) — The type of the repo to get access requests for. Must be one ofmodel
,dataset
orspace
. Defaults tomodel
. - token (
str
, optional) — A valid authentication token (see https://huggingface.co/settings/token).
Returns
List[AccessRequest]
A list of AccessRequest
objects. Each item contains a username
, email
,
status
and timestamp
attribute. If the gated repo has a custom form, the fields
attribute will
be populated with user’s answers.
Raises
HTTPError
HTTPError
— HTTP 400 if the repo is not gated.HTTPError
— HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t havewrite
oradmin
role in the organization the repo belongs to or if you passed aread
token.
Get pending access requests for a given gated repo.
A pending request means the user has requested access to the repo but the request has not been processed yet. If the approval mode is automatic, this list should be empty. Pending requests can be accepted or rejected using accept_access_request() and reject_access_request().
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
Example:
>>> from huggingface_hub import list_pending_access_requests, accept_access_request
# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='pending',
fields=None,
),
...
]
# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
list_rejected_access_requests
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None ) → List[AccessRequest]
Parameters
- repo_id (
str
) — The id of the repo to get access requests for. - repo_type (
str
, optional) — The type of the repo to get access requests for. Must be one ofmodel
,dataset
orspace
. Defaults tomodel
. - token (
str
, optional) — A valid authentication token (see https://huggingface.co/settings/token).
Returns
List[AccessRequest]
A list of AccessRequest
objects. Each item contains a username
, email
,
status
and timestamp
attribute. If the gated repo has a custom form, the fields
attribute will
be populated with user’s answers.
Raises
HTTPError
HTTPError
— HTTP 400 if the repo is not gated.HTTPError
— HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t havewrite
oradmin
role in the organization the repo belongs to or if you passed aread
token.
Get rejected access requests for a given gated repo.
A rejected request means the user has requested access to the repo and the request has been explicitly rejected by a repo owner (either you or another user from your organization). The user cannot download any file of the repo. Rejected requests can be accepted or cancelled at any time using accept_access_request() and cancel_access_request(). A cancelled request will go back to the pending list while an accepted request will go to the accepted list.
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
Example:
>>> from huggingface_hub import list_rejected_access_requests
>>> requests = list_rejected_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='rejected',
fields=None,
),
...
]
list_repo_commits
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None revision: Optional[str] = None formatted: bool = False ) → List[GitCommitInfo]
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if listing commits from a dataset or a Space,None
or"model"
if listing from a model. Default isNone
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header. - revision (
str
, optional) — The git revision to list commits from. Defaults to the head of the"main"
branch. - formatted (
bool
) — Whether to return the HTML-formatted title and description of the commits. Defaults to False.
Returns
List[GitCommitInfo]
list of objects containing information about the commits for a repo on the Hub.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If revision is not found (error 404) on the repo.
Get the list of commits of a given revision for a repo on the Hub.
Commits are sorted by date (last commit first).
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Commits are sorted by date (last commit first)
>>> initial_commit = api.list_repo_commits("gpt2")[-1]
# Initial commit is always a system commit containing the `.gitattributes` file.
>>> initial_commit
GitCommitInfo(
commit_id='9b865efde13a30c13e0a33e536cf3e4a5a9d71d8',
authors=['system'],
created_at=datetime.datetime(2019, 2, 18, 10, 36, 15, tzinfo=datetime.timezone.utc),
title='initial commit',
message='',
formatted_title=None,
formatted_message=None
)
# Create an empty branch by deriving from initial commit
>>> api.create_branch("gpt2", "new_empty_branch", revision=initial_commit.commit_id)
list_repo_files
< source >( repo_id: str revision: Optional[str] = None repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None ) → List[str]
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - revision (
str
, optional) — The revision of the model repository from which to get the information. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
List[str]
the list of files in a given repository.
Get the list of files in a given repo.
list_repo_likers
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None ) → List[User]
Parameters
- repo_id (
str
) — The repository to retrieve likers from. Example:"user/my-cool-model"
. - token (
str
, optional) — Authentication token. Will default to the stored token. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
.
Returns
List[User]
a list of User objects.
List all users who liked a given repo on the Hugging Face Hub.
See also like() and list_liked_repos().
list_repo_refs
< source >( repo_id: str repo_type: Optional[str] = None include_pull_requests: bool = False token: Optional[Union[bool, str]] = None ) → GitRefs
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if listing refs from a dataset or a Space,None
or"model"
if listing from a model. Default isNone
. - include_pull_requests (
bool
, optional) — Whether to include refs from pull requests in the list. Defaults toFalse
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
object containing all information about branches and tags for a repo on the Hub.
Get the list of refs of a given repo (both tags and branches).
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_repo_refs("gpt2")
GitRefs(branches=[GitRefInfo(name='main', ref='refs/heads/main', target_commit='e7da7f221d5bf496a48136c0cd264e630fe9fcc8')], converts=[], tags=[])
>>> api.list_repo_refs("bigcode/the-stack", repo_type='dataset')
GitRefs(
branches=[
GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'),
GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714')
],
converts=[],
tags=[
GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da')
]
)
list_repo_tree
< source >( repo_id: str path_in_repo: Optional[str] = None recursive: bool = False expand: bool = False revision: Optional[str] = None repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None ) → Iterable[Union[RepoFile, RepoFolder]]
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - path_in_repo (
str
, optional) — Relative path of the tree (folder) in the repo, for example:"checkpoints/1fec34a/results"
. Will default to the root tree (folder) of the repository. - recursive (
bool
, optional, defaults toFalse
) — Whether to list tree’s files and folders recursively. - expand (
bool
, optional, defaults toFalse
) — Whether to fetch more information about the tree’s files and folders (e.g. last commit and files’ security scan results). This operation is more expensive for the server so only 50 results are returned per page (instead of 1000). As pagination is implemented inhuggingface_hub
, this is transparent for you except for the time it takes to get the results. - revision (
str
, optional) — The revision of the repository from which to get the tree. Defaults to"main"
branch. - repo_type (
str
, optional) — The type of the repository from which to get the tree ("model"
,"dataset"
or"space"
). Defaults to"model"
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Iterable[Union[RepoFile, RepoFolder]]
The information about the tree’s files and folders, as an iterable of RepoFile
and RepoFolder
objects. The order of the files and folders is
not guaranteed.
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If revision is not found (error 404) on the repo.
- EntryNotFoundError — If the tree (folder) does not exist (error 404) on the repo.
List a repo tree’s files and folders and get information about them.
Examples:
Get information about a repo’s tree.
>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("lysandre/arxiv-nlp")
>>> repo_tree
<generator object HfApi.list_repo_tree at 0x7fa4088e1ac0>
>>> list(repo_tree)
[
RepoFile(path='.gitattributes', size=391, blob_id='ae8c63daedbd4206d7d40126955d4e6ab1c80f8f', lfs=None, last_commit=None, security=None),
RepoFile(path='README.md', size=391, blob_id='43bd404b159de6fba7c2f4d3264347668d43af25', lfs=None, last_commit=None, security=None),
RepoFile(path='config.json', size=554, blob_id='2f9618c3a19b9a61add74f70bfb121335aeef666', lfs=None, last_commit=None, security=None),
RepoFile(
path='flax_model.msgpack', size=497764107, blob_id='8095a62ccb4d806da7666fcda07467e2d150218e',
lfs={'size': 497764107, 'sha256': 'd88b0d6a6ff9c3f8151f9d3228f57092aaea997f09af009eefd7373a77b5abb9', 'pointer_size': 134}, last_commit=None, security=None
),
RepoFile(path='merges.txt', size=456318, blob_id='226b0752cac7789c48f0cb3ec53eda48b7be36cc', lfs=None, last_commit=None, security=None),
RepoFile(
path='pytorch_model.bin', size=548123560, blob_id='64eaa9c526867e404b68f2c5d66fd78e27026523',
lfs={'size': 548123560, 'sha256': '9be78edb5b928eba33aa88f431551348f7466ba9f5ef3daf1d552398722a5436', 'pointer_size': 134}, last_commit=None, security=None
),
RepoFile(path='vocab.json', size=898669, blob_id='b00361fece0387ca34b4b8b8539ed830d644dbeb', lfs=None, last_commit=None, security=None)]
]
Get even more information about a repo’s tree (last commit and files’ security scan results)
>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("prompthero/openjourney-v4", expand=True)
>>> list(repo_tree)
[
RepoFolder(
path='feature_extractor',
tree_id='aa536c4ea18073388b5b0bc791057a7296a00398',
last_commit={
'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
'title': 'Upload diffusers weights (#48)',
'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
}
),
RepoFolder(
path='safety_checker',
tree_id='65aef9d787e5557373fdf714d6c34d4fcdd70440',
last_commit={
'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
'title': 'Upload diffusers weights (#48)',
'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
}
),
RepoFile(
path='model_index.json',
size=582,
blob_id='d3d7c1e8c3e78eeb1640b8e2041ee256e24c9ee1',
lfs=None,
last_commit={
'oid': 'b195ed2d503f3eb29637050a886d77bd81d35f0e',
'title': 'Fix deprecation warning by changing `CLIPFeatureExtractor` to `CLIPImageProcessor`. (#54)',
'date': datetime.datetime(2023, 5, 15, 21, 41, 59, tzinfo=datetime.timezone.utc)
},
security={
'safe': True,
'av_scan': {'virusFound': False, 'virusNames': None},
'pickle_import_scan': None
}
)
...
]
list_spaces
< source >( filter: Union[str, Iterable[str], None] = None author: Optional[str] = None search: Optional[str] = None sort: Union[Literal['last_modified'], str, None] = None direction: Optional[Literal[-1]] = None limit: Optional[int] = None datasets: Union[str, Iterable[str], None] = None models: Union[str, Iterable[str], None] = None linked: bool = False full: Optional[bool] = None token: Optional[str] = None ) → Iterable[SpaceInfo]
Parameters
- filter (
str
orIterable
, optional) — A string tag or list of tags that can be used to identify Spaces on the Hub. - author (
str
, optional) — A string which identify the author of the returned Spaces. - search (
str
, optional) — A string that will be contained in the returned Spaces. - sort (
Literal["last_modified"]
orstr
, optional) — The key with which to sort the resulting Spaces. Possible values are the properties of the huggingface_hub.hf_api.SpaceInfo class. - direction (
Literal[-1]
orint
, optional) — Direction in which to sort. The value-1
sorts by descending order while all other values sort by ascending order. - limit (
int
, optional) — The limit on the number of Spaces fetched. Leaving this option toNone
fetches all Spaces. - datasets (
str
orIterable
, optional) — Whether to return Spaces that make use of a dataset. The name of a specific dataset can be passed as a string. - models (
str
orIterable
, optional) — Whether to return Spaces that make use of a model. The name of a specific model can be passed as a string. - linked (
bool
, optional) — Whether to return Spaces that make use of either a model or a dataset. - full (
bool
, optional) — Whether to fetch all Spaces data, including thelast_modified
,siblings
andcard_data
fields. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Iterable[SpaceInfo]
an iterable of huggingface_hub.hf_api.SpaceInfo objects.
List Spaces hosted on the Hugging Face Hub, given some filters.
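Example (a usage sketch; the model id shown is only illustrative):

```python
from huggingface_hub import HfApi

api = HfApi()

# List the 5 most recently modified Spaces that use a given model.
spaces = api.list_spaces(
    models="stabilityai/stable-diffusion-2-1",  # example model id
    sort="last_modified",
    direction=-1,
    limit=5,
)
for space in spaces:
    print(space.id)
```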
merge_pull_request
< source >( repo_id: str discussion_num: int token: Optional[str] = None comment: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionStatusChange
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - discussion_num (
int
) — The number of the Discussion or Pull Request. Must be a strictly positive integer. - comment (
str
, optional) — An optional comment to post with the status change. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Returns
the status change event
Merges a Pull Request.
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
model_info
< source >( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None securityStatus: Optional[bool] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) → huggingface_hub.hf_api.ModelInfo
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - revision (
str
, optional) — The revision of the model repository from which to get the information. - timeout (
float
, optional) — Timeout in seconds for the request to the Hub. - securityStatus (
bool
, optional) — Whether to retrieve the security status from the model repository as well. - files_metadata (
bool
, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults toFalse
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
The model repository information.
Get info on one specific model on huggingface.co.
Model can be private if you pass an acceptable token or are logged in.
Raises the following errors:
- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access. - RevisionNotFoundError If the revision to download from cannot be found.
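Example (a sketch using the public gpt2 checkpoint; attribute names follow recent huggingface_hub versions):

```python
from huggingface_hub import HfApi

api = HfApi()

# Fetch info about a public model. Pass a token for private repos.
info = api.model_info("gpt2")
print(info.sha)   # commit hash of the latest revision on "main"
print(info.tags)  # e.g. ["pytorch", "text-generation", ...]

# Request per-file metadata (size, LFS info) as well.
info = api.model_info("gpt2", files_metadata=True)
for sibling in info.siblings:
    print(sibling.rfilename, sibling.size)
```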
move_repo
< source >( from_id: str to_id: str repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- from_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. Original repository identifier. - to_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. Final repository identifier. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Move a repository from namespace1/repo_name1 to namespace2/repo_name2.
Note there are certain limitations. For more information about moving repositories, please see https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo.
Raises the following errors:
- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
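Example (a sketch; the repo ids are hypothetical and a token with write access is required):

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")

# Rename a repo within the same namespace...
api.move_repo(from_id="my-user/old-name", to_id="my-user/new-name")

# ...or transfer a dataset to an organization you belong to.
api.move_repo(
    from_id="my-user/my-dataset",
    to_id="my-org/my-dataset",
    repo_type="dataset",
)
```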
parse_safetensors_file_metadata
< source >( repo_id: str filename: str repo_type: Optional[str] = None revision: Optional[str] = None token: Optional[str] = None ) → SafetensorsFileMetadata
Parameters
- repo_id (
str
) — A user or an organization name and a repo name separated by a/
. - filename (
str
) — The name of the file in the repo. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if the file is in a dataset or space,None
or"model"
if in a model. Default isNone
. - revision (
str
, optional) — The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the head of the"main"
branch. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
SafetensorsFileMetadata
information related to a safetensors file.
Raises
- NotASafetensorsRepoError — if the repo is not a safetensors repo, i.e. doesn't have either a model.safetensors or a model.safetensors.index.json file.
- SafetensorsParsingError — if a safetensors file header couldn't be parsed correctly.
Parse metadata from a safetensors file on the Hub.
To parse metadata from all safetensors files in a repo at once, use get_safetensors_metadata().
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
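Example (a sketch; it assumes the target repo hosts a model.safetensors file, as gpt2 currently does):

```python
from huggingface_hub import HfApi

api = HfApi()

# Inspect a single safetensors file without downloading it.
metadata = api.parse_safetensors_file_metadata("gpt2", "model.safetensors")
print(metadata.metadata)  # free-form header metadata, e.g. {"format": "pt"}
for name, tensor in metadata.tensors.items():
    print(name, tensor.dtype, tensor.shape)
```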
pause_inference_endpoint
< source >( name: str namespace: Optional[str] = None token: Optional[str] = None ) → InferenceEndpoint
Parameters
- name (
str
) — The name of the Inference Endpoint to pause. - namespace (
str
, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Returns
information about the paused Inference Endpoint.
Pause an Inference Endpoint.
A paused Inference Endpoint will not be charged. It can be resumed at any time using resume_inference_endpoint(). This is different from scaling the Inference Endpoint to zero with scale_to_zero_inference_endpoint(), which is automatically restarted when a request is made to it.
For convenience, you can also pause an Inference Endpoint using InferenceEndpoint.pause().
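Example (a sketch; the endpoint name is hypothetical and a valid token is required):

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")

endpoint = api.pause_inference_endpoint("my-endpoint-name")
print(endpoint.status)  # e.g. "paused"

# Later, bring it back up with a manual resume.
endpoint = api.resume_inference_endpoint("my-endpoint-name")
```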
pause_space
< source >( repo_id: str token: Optional[str] = None ) → SpaceRuntime
Parameters
- repo_id (
str
) — ID of the Space to pause. Example:"Salesforce/BLIP2"
. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Returns
Runtime information about your Space including stage=PAUSED
and requested hardware.
Raises
RepositoryNotFoundError or HfHubHTTPError or BadRequestError
- RepositoryNotFoundError — If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you are not authenticated.
- HfHubHTTPError — 403 Forbidden: only the owner of a Space can pause it. If you want to manage a Space that you don’t own, either ask the owner by opening a Discussion or duplicate the Space.
- BadRequestError — If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide a static Space, you can set it to private.
Pause your Space.
A paused Space stops executing until manually restarted by its owner. This is different from the sleeping state in which free Spaces go after 48h of inactivity. Paused time is not billed to your account, no matter the hardware you've selected. To restart your Space, use restart_space() or go to your Space settings page.
For more details, please visit the docs.
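Example (a sketch; the Space id is hypothetical and you must own the Space):

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")

runtime = api.pause_space("my-user/my-demo")
print(runtime.stage)  # "PAUSED"

# The Space stays paused until you restart it:
# api.restart_space("my-user/my-demo")
```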
preupload_lfs_files
< source >( repo_id: str additions: Iterable[CommitOperationAdd] token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None create_pr: Optional[bool] = None num_threads: int = 5 free_memory: bool = True gitignore_content: Optional[str] = None )
Parameters
- repo_id (
str
) — The repository in which you will commit the files, for example:"username/custom_transformers"
. - additions (
Iterable
of CommitOperationAdd) — The list of files to upload. Warning: the objects in this list will be mutated to include information relative to the upload. Do not reuse the same objects for multiple commits. - token (
str
, optional) — Authentication token. Will default to the stored token. - repo_type (
str
, optional) — The type of repository to upload to (e.g."model"
-default-,"dataset"
or"space"
). - revision (
str
, optional) — The git revision to commit from. Defaults to the head of the"main"
branch. - create_pr (
boolean
, optional) — Whether or not you plan to create a Pull Request with that commit. Defaults toFalse
. - num_threads (
int
, optional) — Number of concurrent threads for uploading files. Defaults to 5. Setting it to 2 means at most 2 files will be uploaded concurrently. - gitignore_content (
str
, optional) — The content of the.gitignore
file to know which files should be ignored. The order of priority is to first check ifgitignore_content
is passed, then check if the.gitignore
file is present in the list of files to commit and finally default to the.gitignore
file already hosted on the Hub (if any).
Pre-upload LFS files to S3 in preparation for a future commit.
This method is useful if you are generating the files to upload on-the-fly and you don’t want to store them in memory before uploading them all at once.
This is a power-user method. You shouldn’t need to call it directly to make a normal commit. Use create_commit() directly instead.
Commit operations will be mutated during the process. In particular, the attached path_or_fileobj
will be
removed after the upload to save memory (and replaced by an empty bytes
object). Do not reuse the same
objects except to pass them to create_commit(). If you don’t want to remove the attached content from the
commit operation object, pass free_memory=False
.
Example:
>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo
>>> repo_id = create_repo("test_preupload").repo_id
# Generate and preupload LFS files one by one
>>> operations = [] # List of all `CommitOperationAdd` objects that will be generated
>>> for i in range(5):
... content = ... # generate binary content
... addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
... preupload_lfs_files(repo_id, additions=[addition]) # upload + free memory
... operations.append(addition)
# Create commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
reject_access_request
< source >( repo_id: str user: str repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (
str
) — The id of the repo to reject access request for. - user (
str
) — The username of the user which access request should be rejected. - repo_type (
str
, optional) — The type of the repo to reject access request for. Must be one ofmodel
,dataset
orspace
. Defaults tomodel
. - token (
str
, optional) — A valid authentication token (see https://huggingface.co/settings/token).
Raises
HTTPError
HTTPError
— HTTP 400 if the repo is not gated.HTTPError
— HTTP 403 if you only have read-only access to the repo. This can be the case if you don’t havewrite
oradmin
role in the organization the repo belongs to or if you passed aread
token.HTTPError
— HTTP 404 if the user does not exist on the Hub.HTTPError
— HTTP 404 if the user access request cannot be found.HTTPError
— HTTP 404 if the user access request is already in the rejected list.
Reject an access request from a user for a given gated repo.
A rejected request will go to the rejected list. The user cannot download any file of the repo. Rejected requests can be accepted or cancelled at any time using accept_access_request() and cancel_access_request(). A cancelled request will go back to the pending list while an accepted request will go to the accepted list.
For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
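Example (a sketch; the repo id and username are hypothetical and a token with write access to the gated repo is required):

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")

api.reject_access_request(repo_id="my-org/gated-model", user="some-user")

# A rejected request can still be accepted later:
# api.accept_access_request(repo_id="my-org/gated-model", user="some-user")
```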
rename_discussion
< source >( repo_id: str discussion_num: int new_title: str token: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionTitleChange
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - discussion_num (
int
) — The number of the Discussion or Pull Request. Must be a strictly positive integer. - new_title (
str
) — The new title for the discussion. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Returns
the title change event
Renames a Discussion.
Examples:
>>> new_title = "New title, fixing a typo"
>>> HfApi().rename_discussion(
... repo_id="username/repo_name",
... discussion_num=34,
... new_title=new_title
... )
# DiscussionTitleChange(id='deadbeef0000000', type='title-change', ...)
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
repo_exists
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if getting repository info from a dataset or a space,None
or"model"
if getting repository info from a model. Default isNone
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Checks if a repository exists on the Hugging Face Hub.
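Example (repo_exists is also exposed at the package root):

```python
from huggingface_hub import repo_exists

print(repo_exists("google/pegasus-xsum"))  # True
print(repo_exists("google/not-a-repo"))    # False

# Check a dataset instead of a model.
print(repo_exists("my-username/my-dataset", repo_type="dataset"))
```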
repo_info
< source >( repo_id: str revision: Optional[str] = None repo_type: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) → Union[SpaceInfo, DatasetInfo, ModelInfo]
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - revision (
str
, optional) — The revision of the repository from which to get the information. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if getting repository info from a dataset or a space,None
or"model"
if getting repository info from a model. Default isNone
. - timeout (
float
, optional) — Timeout in seconds for the request to the Hub. - files_metadata (
bool
, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults toFalse
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Union[SpaceInfo, DatasetInfo, ModelInfo]
The repository information, as a huggingface_hub.hf_api.DatasetInfo, huggingface_hub.hf_api.ModelInfo or huggingface_hub.hf_api.SpaceInfo object.
Get the info object for a given repo of a given type.
Raises the following errors:
- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access. - RevisionNotFoundError If the revision to download from cannot be found.
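Example (a sketch; gpt2 and squad are public repos used for illustration):

```python
from huggingface_hub import HfApi

api = HfApi()

# repo_type selects between models, datasets and Spaces.
model = api.repo_info("gpt2")                          # a ModelInfo
dataset = api.repo_info("squad", repo_type="dataset")  # a DatasetInfo
print(model.sha)
print(dataset.sha)
```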
request_space_hardware
< source >( repo_id: str hardware: SpaceHardware token: Optional[str] = None sleep_time: Optional[int] = None ) → SpaceRuntime
Parameters
- repo_id (
str
) — ID of the repo to update. Example:"bigcode/in-the-stack"
. - hardware (
str
or SpaceHardware) — Hardware on which to run the Space. Example:"t4-medium"
. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided. - sleep_time (
int
, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to-1
if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
Returns
Runtime information about a Space including Space stage and hardware.
Request new hardware for a Space.
It is also possible to request hardware directly when creating the Space repo! See create_repo() for details.
request_space_storage
< source >( repo_id: str storage: SpaceStorage token: Optional[str] = None ) → SpaceRuntime
Parameters
- repo_id (
str
) — ID of the Space to update. Example:"HuggingFaceH4/open_llm_leaderboard"
. - storage (
str
or SpaceStorage) — Storage tier. Either ‘small’, ‘medium’, or ‘large’. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Returns
Runtime information about a Space including Space stage and hardware.
Request persistent storage for a Space.
It is not possible to decrease persistent storage after it's granted. To do so, you must delete it via delete_space_storage().
restart_space
< source >( repo_id: str token: Optional[str] = None factory_reboot: bool = False ) → SpaceRuntime
Parameters
- repo_id (
str
) — ID of the Space to restart. Example:"Salesforce/BLIP2"
. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided. - factory_reboot (
bool
, optional) — IfTrue
, the Space will be rebuilt from scratch without caching any requirements.
Returns
Runtime information about your Space.
Raises
RepositoryNotFoundError or HfHubHTTPError or BadRequestError
- RepositoryNotFoundError — If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you are not authenticated.
- HfHubHTTPError — 403 Forbidden: only the owner of a Space can restart it. If you want to restart a Space that you don’t own, either ask the owner by opening a Discussion or duplicate the Space.
- BadRequestError — If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide a static Space, you can set it to private.
Restart your Space.
This is the only way to programmatically restart a Space if you've paused it (see pause_space()). You must be the owner of the Space to restart it. If you are using upgraded hardware, your account will be billed as soon as the Space is restarted. You can trigger a restart no matter the current state of a Space.
For more details, please visit the docs.
resume_inference_endpoint
< source >( name: str namespace: Optional[str] = None token: Optional[str] = None ) → InferenceEndpoint
Parameters
- name (
str
) — The name of the Inference Endpoint to resume. - namespace (
str
, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Returns
information about the resumed Inference Endpoint.
Resume an Inference Endpoint.
For convenience, you can also resume an Inference Endpoint using InferenceEndpoint.resume().
run_as_future
< source >( fn: Callable[..., R] *args **kwargs ) → Future
Parameters
- fn (
Callable
) — The method to run in the background. - *args, **kwargs — Arguments with which the method will be called.
Returns
Future
a Future instance to get the result of the task.
Run a method in the background and return a Future instance.
The main goal is to run methods without blocking the main thread (e.g. to push data during training). Background jobs are queued to preserve order but are not run in parallel. If you need to speed up your scripts by parallelizing many calls to the API, you must set up and use your own ThreadPoolExecutor.
Note: Most-used methods like upload_file(), upload_folder() and create_commit() have a run_as_future: bool
argument to directly call them in the background. This is equivalent to calling api.run_as_future(...)
on them
but less verbose.
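Example (a sketch with an arbitrary local callable; the returned object behaves like any concurrent.futures.Future):

```python
from huggingface_hub import HfApi

api = HfApi()

def expensive_step(x):
    return x * 2

# Schedule the call in the background; the main thread is not blocked.
future = api.run_as_future(expensive_step, 21)
# ... keep doing work on the main thread ...
print(future.result())  # 42 (blocks until the job has finished)
```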
scale_to_zero_inference_endpoint
< source >( name: str namespace: Optional[str] = None token: Optional[str] = None ) → InferenceEndpoint
Parameters
- name (
str
) — The name of the Inference Endpoint to scale to zero. - namespace (
str
, optional) — The namespace in which the Inference Endpoint is located. Defaults to the current user. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Returns
information about the scaled-to-zero Inference Endpoint.
Scale Inference Endpoint to zero.
An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a cold start delay. This is different from pausing the Inference Endpoint with pause_inference_endpoint(), which would require a manual resume with resume_inference_endpoint().
For convenience, you can also scale an Inference Endpoint to zero using InferenceEndpoint.scale_to_zero().
set_space_sleep_time
< source >( repo_id: str sleep_time: int token: Optional[str] = None ) → SpaceRuntime
Parameters
- repo_id (
str
) — ID of the repo to update. Example:"bigcode/in-the-stack"
. - sleep_time (
int
, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to-1
if you don’t want your Space to pause (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Returns
Runtime information about a Space including Space stage and hardware.
Set a custom sleep time for a Space running on upgraded hardware.
Your Space will go to sleep after X seconds of inactivity. You are not billed when your Space is in “sleep” mode. If a new visitor lands on your Space, it will “wake it up”. Only upgraded hardware can have a configurable sleep time. To know more about the sleep stage, please refer to https://huggingface.co/docs/hub/spaces-gpus#sleep-time.
It is also possible to set a custom sleep time when requesting hardware with request_space_hardware().
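Example (a sketch; the Space id is hypothetical and must run on upgraded hardware):

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")

# Put the Space to sleep after one hour of inactivity.
api.set_space_sleep_time("my-user/my-demo", sleep_time=3600)

# Or disable sleeping entirely (the hardware keeps running and billing continues).
api.set_space_sleep_time("my-user/my-demo", sleep_time=-1)
```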
snapshot_download
< source >( repo_id: str repo_type: Optional[str] = None revision: Optional[str] = None cache_dir: Union[str, Path, None] = None local_dir: Union[str, Path, None] = None local_dir_use_symlinks: Union[bool, Literal['auto']] = 'auto' proxies: Optional[Dict] = None etag_timeout: float = 10 resume_download: bool = False force_download: bool = False token: Optional[Union[str, bool]] = None local_files_only: bool = False allow_patterns: Optional[Union[List[str], str]] = None ignore_patterns: Optional[Union[List[str], str]] = None max_workers: int = 8 tqdm_class: Optional[base_tqdm] = None )
Parameters
- repo_id (
str
) — A user or an organization name and a repo name separated by a/
. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if downloading from a dataset or space,None
or"model"
if downloading from a model. Default isNone
. - revision (
str
, optional) — An optional Git revision id which can be a branch name, a tag, or a commit hash. - cache_dir (
str
,Path
, optional) — Path to the folder where cached files are stored. - local_dir (
str
orPath
, optional) — If provided, the downloaded files will be placed under this directory, either as symlinks (default) or regular files (see description for more details). - local_dir_use_symlinks (
"auto"
orbool
, defaults to"auto"
) — To be used withlocal_dir
. If set to “auto”, the cache directory will be used and the file will be either duplicated or symlinked to the local directory depending on its size. If set toTrue
, a symlink will be created, no matter the file size. If set toFalse
, the file will either be duplicated from cache (if already exists) or downloaded from the Hub and not cached. See description for more details. - proxies (
dict
, optional) — Dictionary mapping protocol to the URL of the proxy passed torequests.request
. - etag_timeout (
float
, optional, defaults to10
) — When fetching ETag, how many seconds to wait for the server to send data before giving up which is passed torequests.request
. - resume_download (
bool
, optional, defaults toFalse) — IfTrue, resume a previously interrupted download. - force_download (
bool
, optional, defaults toFalse
) — Whether the file should be downloaded even if it already exists in the local cache. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header. - local_files_only (
bool
, optional, defaults toFalse
) — IfTrue
, avoid downloading the file and return the path to the local cached file if it exists. - allow_patterns (
List[str]
orstr
, optional) — If provided, only files matching at least one pattern are downloaded. - ignore_patterns (
List[str]
orstr
, optional) — If provided, files matching any of the patterns are not downloaded. - max_workers (
int
, optional) — Number of concurrent threads to download files (1 thread = 1 file download). Defaults to 8. - tqdm_class (
tqdm
, optional) — If provided, overwrites the default behavior for the progress bar. Passed argument must inherit fromtqdm.auto.tqdm
or at least mimic its behavior. Note that thetqdm_class
is not passed to each individual download. Defaults to the custom HF progress bar that can be disabled by settingHF_HUB_DISABLE_PROGRESS_BARS
environment variable.
Download repo files.
Download a whole snapshot of a repo’s files at the specified revision. This is useful when you want all files from
a repo, because you don’t know which ones you will need a priori. All files are nested inside a folder in order
to keep their actual filename relative to that folder. You can also filter which files to download using
allow_patterns
and ignore_patterns
.
If local_dir
is provided, the file structure from the repo will be replicated in this location. You can configure
how you want to move those files:
- If
local_dir_use_symlinks="auto"
(default), files are downloaded and stored in the cache directory as blob files. Small files (<5MB) are duplicated inlocal_dir
while a symlink is created for bigger files. The goal is to be able to manually edit and save small files without corrupting the cache while saving disk space for binary files. The 5MB threshold can be configured with theHF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD
environment variable. - If
local_dir_use_symlinks=True
, files are downloaded, stored in the cache directory and symlinked inlocal_dir
. This is optimal in terms of disk usage but files must not be manually edited. - If
local_dir_use_symlinks=False
and the blob files exist in the cache directory, they are duplicated in the local dir. This means disk usage is not optimized. - Finally, if
local_dir_use_symlinks=False
and the blob files do not exist in the cache directory, then the files are downloaded and directly placed underlocal_dir
. This means if you need to download them again later, they will be re-downloaded entirely.
An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly configured. It is also not possible to filter which files to download when cloning a repository using git.
Raises the following errors:
EnvironmentError
iftoken=True
and the token cannot be found.OSError
if ETag cannot be determined.ValueError
if some parameter value is invalid
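Example (a sketch using the public gpt2 repo; the local folder path is arbitrary):

```python
from huggingface_hub import snapshot_download

# Download a full snapshot into the cache and get its local path.
local_path = snapshot_download(repo_id="gpt2")

# Only fetch safetensors weights and JSON configs, skipping everything else.
local_path = snapshot_download(
    repo_id="gpt2",
    allow_patterns=["*.safetensors", "*.json"],
)

# Mirror the repo into a plain local folder instead of the cache.
local_path = snapshot_download(
    repo_id="gpt2",
    local_dir="./gpt2",
    local_dir_use_symlinks=False,
)
```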
space_info
< source >( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) → SpaceInfo
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - revision (
str
, optional) — The revision of the space repository from which to get the information. - timeout (
float
, optional) — Timeout in seconds for the request to the Hub. - files_metadata (
bool
, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults toFalse
. - token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
The space repository information.
Get info on one specific Space on huggingface.co.
Space can be private if you pass an acceptable token.
Raises the following errors:
- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access. - RevisionNotFoundError If the revision to download from cannot be found.
super_squash_history
< source >( repo_id: str branch: Optional[str] = None commit_message: Optional[str] = None repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. - branch (
str
, optional) — The branch to squash. Defaults to the head of the"main"
branch. - commit_message (
str
, optional) — The commit message to use for the squashed commit. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if listing commits from a dataset or a Space,None
or"model"
if listing from a model. Default isNone
. - token (
str
, optional) — A valid authentication token (see https://huggingface.co/settings/token). If the machine is logged in (throughhuggingface-cli login
or login()), token can be automatically retrieved from the cache.
Raises
RepositoryNotFoundError or RevisionNotFoundError or BadRequestError
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If the branch to squash cannot be found.
- BadRequestError — If the reference is not a valid branch. You cannot squash history on tags.
Squash commit history on a branch for a repo on the Hub.
Squashing the repo history is useful when you know you’ll make hundreds of commits and you don’t want to clutter the history. Squashing commits can only be performed from the head of a branch.
Once squashed, the commit history cannot be retrieved. This operation is irreversible.
Once the history of a branch has been squashed, it is not possible to merge it back into another branch since their history will have diverged.
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Create repo
>>> repo_id = api.create_repo("test-squash").repo_id
# Make a lot of commits.
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="lfs.bin", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"another_content")
# Squash history
>>> api.super_squash_history(repo_id=repo_id)
unlike
< source >( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None )
Parameters
- repo_id (
str
) — The repository to unlike. Example:"user/my-cool-model"
. - token (
str
, optional) — Authentication token. Will default to the stored token. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if unliking a dataset or space,None
or"model"
if unliking a model. Default isNone
.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
Unlike a given repo on the Hub (e.g. remove from favorite list).
See also like() and list_liked_repos().
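A minimal sketch (the repo ids are placeholders):

```python
>>> from huggingface_hub import unlike
>>> unlike("user/my-cool-model")  # remove the model from your liked repos
>>> unlike("user/my-dataset", repo_type="dataset")  # same for a dataset
```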
update_collection_item
< source >( collection_slug: str item_object_id: str note: Optional[str] = None position: Optional[int] = None token: Optional[str] = None )
Parameters
- collection_slug (
str
) — Slug of the collection to update. Example:"TheBloke/recent-models-64f9a55bb3115b4f513ec026"
. - item_object_id (
str
) — ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id). It must be retrieved from a CollectionItem object. Example:collection.items[0].item_object_id
. - note (
str
, optional) — A note to attach to the item in the collection. The maximum size for a note is 500 characters. - position (
int
, optional) — New position of the item in the collection. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Update an item in a collection.
Example:
>>> from huggingface_hub import get_collection, update_collection_item
# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
# Update item based on its ID (add note + update position)
>>> update_collection_item(
... collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
... item_object_id=collection.items[-1].item_object_id,
... note="Newly updated model!",
... position=0,
... )
update_collection_metadata
< source >( collection_slug: str title: Optional[str] = None description: Optional[str] = None position: Optional[int] = None private: Optional[bool] = None theme: Optional[str] = None token: Optional[str] = None )
Parameters
- collection_slug (
str
) — Slug of the collection to update. Example:"TheBloke/recent-models-64f9a55bb3115b4f513ec026"
. - title (
str
) — Title of the collection to update. - description (
str
, optional) — Description of the collection to update. - position (
int
, optional) — New position of the collection in the list of collections of the user. - private (
bool
, optional) — Whether the collection should be private or not. - theme (
str
, optional) — Theme of the collection on the Hub. - token (
str
, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Update metadata of a collection on the Hub.
All arguments are optional. Only provided metadata will be updated.
Returns: Collection
Example:
>>> from huggingface_hub import update_collection_metadata
>>> collection = update_collection_metadata(
... collection_slug="username/iccv-2023-64f9a55bb3115b4f513ec026",
... title="ICCV Oct. 2023",
... description="Portfolio of models, datasets, papers and demos I presented at ICCV Oct. 2023",
... private=False,
... theme="pink",
... )
>>> collection.slug
"username/iccv-oct-2023-64f9a55bb3115b4f513ec026"
# ^collection slug got updated but not the trailing ID
update_inference_endpoint
< source >( name: str accelerator: Optional[str] = None instance_size: Optional[str] = None instance_type: Optional[str] = None min_replica: Optional[int] = None max_replica: Optional[int] = None repository: Optional[str] = None framework: Optional[str] = None revision: Optional[str] = None task: Optional[str] = None namespace: Optional[str] = None token: Optional[str] = None ) → InferenceEndpoint
Parameters
- name (
str
) — The name of the Inference Endpoint to update. - accelerator (
str
, optional) — The hardware accelerator to be used for inference (e.g."cpu"
). - instance_size (
str
, optional) — The size or type of the instance to be used for hosting the model (e.g."large"
). - instance_type (
str
, optional) — The cloud instance type where the Inference Endpoint will be deployed (e.g."c6i"
). - min_replica (
int
, optional) — The minimum number of replicas (instances) to keep running for the Inference Endpoint. - max_replica (
int
, optional) — The maximum number of replicas (instances) to scale to for the Inference Endpoint. - repository (
str
, optional) — The name of the model repository associated with the Inference Endpoint (e.g."gpt2"
). - framework (
str
, optional) — The machine learning framework used for the model (e.g."custom"
). - revision (
str
, optional) — The specific model revision to deploy on the Inference Endpoint (e.g."6c0e6080953db56375760c0471a8c5f2929baf11"
). - task (
str
, optional) — The task on which to deploy the model (e.g."text-classification"
). - namespace (
str
, optional) — The namespace where the Inference Endpoint will be updated. Defaults to the current user’s namespace. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Returns
Information about the updated Inference Endpoint.
Update an Inference Endpoint.
This method allows the update of either the compute configuration, the deployed model, or both. All arguments are optional but at least one must be provided.
For convenience, you can also update an Inference Endpoint using InferenceEndpoint.update().
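For example, updating the scaling configuration of a running endpoint could look like this (the endpoint name is hypothetical):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.update_inference_endpoint(
...     "my-endpoint-name",
...     min_replica=0,  # allow scale-to-zero
...     max_replica=2,
... )
>>> endpoint.status
```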
update_repo_visibility
< source >( repo_id: str private: bool = False token: Optional[str] = None organization: Optional[str] = None repo_type: Optional[str] = None name: Optional[str] = None )
Parameters
- repo_id (
str
, optional) — A namespace (user or an organization) and a repo name separated by a/
. - private (
bool
, optional, defaults toFalse
) — Whether the model repo should be private. - token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token) - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
.
Update the visibility setting of a repository.
Raises the following errors:
- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
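A short sketch (the repo id is a placeholder):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Make an existing repo private
>>> api.update_repo_visibility(repo_id="username/my-model", private=True)
```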
upload_file
< source >( path_or_fileobj: Union[str, Path, bytes, BinaryIO] path_in_repo: str repo_id: str token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None run_as_future: bool = False ) → CommitInfo or Future
Parameters
- path_or_fileobj (
str
,Path
,bytes
, orIO
) — Path to a file on the local machine or binary data stream / fileobj / buffer. - path_in_repo (
str
) — Relative filepath in the repo, for example:"checkpoints/1fec34a/weights.bin"
- repo_id (
str
) — The repository to which the file will be uploaded, for example:"username/custom_transformers"
- token (
str
, optional) — Authentication token, obtained withHfApi.login
method. Will default to the stored token. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - revision (
str
, optional) — The git revision to commit from. Defaults to the head of the"main"
branch. - commit_message (
str
, optional) — The summary / title / first line of the generated commit - commit_description (
str
optional) — The description of the generated commit - create_pr (
boolean
, optional) — Whether or not to create a Pull Request with that commit. Defaults toFalse
. Ifrevision
is not set, PR is opened against the"main"
branch. Ifrevision
is set and is a branch, PR is opened against this branch. Ifrevision
is set and is not a branch name (example: a commit oid), anRevisionNotFoundError
is returned by the server. - parent_commit (
str
, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified andcreate_pr
isFalse
, the commit will fail ifrevision
does not point toparent_commit
. If specified andcreate_pr
isTrue
, the pull request will be created fromparent_commit
. Specifyingparent_commit
ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently. - run_as_future (
bool
, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passingrun_as_future=True
will return a Future object. Defaults toFalse
.
Returns
CommitInfo or Future
Instance of CommitInfo containing information about the newly created commit (commit hash, commit
url, pr url, commit message,…). If run_as_future=True
is passed, returns a Future object which will
contain the result when executed.
Upload a local file (up to 50 GB) to the given repo. The upload is done through an HTTP POST request, and doesn’t require git or git-lfs to be installed.
Raises the following errors:
- HTTPError — if the HuggingFace API returned an error
- ValueError — if some parameter value is invalid
- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access. - RevisionNotFoundError If the revision to download from cannot be found.
upload_file
assumes that the repo already exists on the Hub. If you get a
Client error 404, please make sure you are authenticated and that repo_id
and
repo_type
are set correctly. If repo does not exist, create it first using
create_repo().
Example:
>>> from huggingface_hub import upload_file
>>> with open("./local/filepath", "rb") as fobj:
... upload_file(
... path_or_fileobj=fobj,
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... )
"https://huggingface.co/datasets/username/my-dataset/blob/main/remote/file/path.h5"
>>> upload_file(
... path_or_fileobj=".\\local\\file\\path",
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-model",
... token="my_token",
... )
"https://huggingface.co/username/my-model/blob/main/remote/file/path.h5"
>>> upload_file(
... path_or_fileobj=".\\local\\file\\path",
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-model",
... token="my_token",
... create_pr=True,
... )
"https://huggingface.co/username/my-model/blob/refs%2Fpr%2F1/remote/file/path.h5"
upload_folder
< source >( repo_id: str folder_path: Union[str, Path] path_in_repo: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None allow_patterns: Optional[Union[List[str], str]] = None ignore_patterns: Optional[Union[List[str], str]] = None delete_patterns: Optional[Union[List[str], str]] = None multi_commits: bool = False multi_commits_verbose: bool = False run_as_future: bool = False ) → CommitInfo or Future
Parameters
- repo_id (
str
) — The repository to which the file will be uploaded, for example:"username/custom_transformers"
- folder_path (
str
orPath
) — Path to the folder to upload on the local file system - path_in_repo (
str
, optional) — Relative path of the directory in the repo, for example:"checkpoints/1fec34a/results"
. Will default to the root folder of the repository. - token (
str
, optional) — Authentication token, obtained withHfApi.login
method. Will default to the stored token. - repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. - revision (
str
, optional) — The git revision to commit from. Defaults to the head of the"main"
branch. - commit_message (
str
, optional) — The summary / title / first line of the generated commit. Defaults to:f"Upload {path_in_repo} with huggingface_hub"
- commit_description (
str
optional) — The description of the generated commit - create_pr (
boolean
, optional) — Whether or not to create a Pull Request with that commit. Defaults toFalse
. Ifrevision
is not set, PR is opened against the"main"
branch. Ifrevision
is set and is a branch, PR is opened against this branch. Ifrevision
is set and is not a branch name (example: a commit oid), anRevisionNotFoundError
is returned by the server. If bothmulti_commits
andcreate_pr
are True, the PR created in the multi-commit process is kept opened. - parent_commit (
str
, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified andcreate_pr
isFalse
, the commit will fail ifrevision
does not point toparent_commit
. If specified andcreate_pr
isTrue
, the pull request will be created fromparent_commit
. Specifyingparent_commit
ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently. - allow_patterns (
List[str]
orstr
, optional) — If provided, only files matching at least one pattern are uploaded. - ignore_patterns (
List[str]
orstr
, optional) — If provided, files matching any of the patterns are not uploaded. - delete_patterns (
List[str]
orstr
, optional) — If provided, remote files matching any of the patterns will be deleted from the repo while committing new files. This is useful if you don’t know which files have already been uploaded. Note: to avoid discrepancies the.gitattributes
file is not deleted even if it matches the pattern. - multi_commits (
bool
) — If True, changes are pushed to a PR using a multi-commit process. Defaults toFalse
. - multi_commits_verbose (
bool
) — If True andmulti_commits
is used, more information will be displayed to the user. - run_as_future (
bool
, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passingrun_as_future=True
will return a Future object. Defaults toFalse
.
Returns
CommitInfo or Future
Instance of CommitInfo containing information about the newly created commit (commit hash, commit
url, pr url, commit message,…). If run_as_future=True
is passed, returns a Future object which will
contain the result when executed.
str or Future: If multi_commits=True, returns the url of the PR created to push the changes. If run_as_future=True is passed, returns a Future object which will contain the result when executed.
Upload a local folder to the given repo. The upload is done through HTTP requests, and doesn’t require git or git-lfs to be installed.
The structure of the folder will be preserved. Files with the same name already present in the repository will be overwritten. Others will be left untouched.
Use the allow_patterns
and ignore_patterns
arguments to specify which files to upload. These parameters
accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as
documented here. If both allow_patterns
and
ignore_patterns
are provided, both constraints apply. By default, all files from the folder are uploaded.
Use the delete_patterns
argument to specify remote files you want to delete. Input type is the same as for
allow_patterns
(see above). If path_in_repo
is also provided, the patterns are matched against paths
relative to this folder. For example, upload_folder(..., path_in_repo="experiment", delete_patterns="logs/*")
will delete any remote file under ./experiment/logs/
. Note that the .gitattributes
file will not be deleted
even if it matches the patterns.
Any .git/
folder present in any subdirectory will be ignored. However, please be aware that the .gitignore
file is not taken into account.
Uses HfApi.create_commit
under the hood.
Raises the following errors:
- HTTPError — if the HuggingFace API returned an error
- ValueError — if some parameter value is invalid
upload_folder
assumes that the repo already exists on the Hub. If you get a Client error 404, please make
sure you are authenticated and that repo_id
and repo_type
are set correctly. If repo does not exist, create
it first using create_repo().
multi_commits
is experimental. Its API and behavior are subject to change in the future without prior notice.
Example:
# Upload checkpoints folder except the log files
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... ignore_patterns="**/logs/*.txt",
... )
# "https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints"
# Upload checkpoints folder including logs while deleting existing logs from the repo
# Useful if you don't know exactly which log files have already been pushed
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... delete_patterns="**/logs/*.txt",
... )
"https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints"
# Upload checkpoints folder while creating a PR
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... create_pr=True,
... )
"https://huggingface.co/datasets/username/my-dataset/tree/refs%2Fpr%2F1/remote/experiment/checkpoints"
whoami
< source >( token: Optional[str] = None )
Call HF API to know “whoami”.
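As a sketch (the token value is a placeholder), the returned dict includes the authenticated user's details:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> user = api.whoami()
>>> user["name"]
```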
huggingface_hub.plan_multi_commits
< source >( operations: typing.Iterable[typing.Union[huggingface_hub._commit_api.CommitOperationAdd, huggingface_hub._commit_api.CommitOperationDelete]] max_operations_per_commit: int = 50 max_upload_size_per_commit: int = 2147483648 ) → Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]
Parameters
- operations (
List
ofCommitOperation()
) — The list of operations to split into commits. - max_operations_per_commit (
int
) — Maximum number of operations in a single commit. Defaults to 50. - max_upload_size_per_commit (
int
) — Maximum size to upload (in bytes) in a single commit. Defaults to 2GB. Files bigger than this limit are uploaded one per commit.
Returns
Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]
a tuple. First item is a list of lists of CommitOperationAdd representing the addition commits to push. The second item is a list of lists of CommitOperationDelete representing the deletion commits.
Split a list of operations in a list of commits to perform.
Implementation follows a sub-optimal (yet simple) algorithm:
- Delete operations are grouped together by commits of maximum
max_operations_per_commits
operations. - All additions exceeding
max_upload_size_per_commit
are committed 1 by 1. - All remaining additions are grouped together and split each time the
max_operations_per_commit
or themax_upload_size_per_commit
limit is reached.
We do not try to optimize the splitting to get the lowest number of commits, as this is an NP-hard problem (see bin packing problem). For our use case, a sub-optimal solution is acceptable, so we favored an easy-to-explain implementation.
plan_multi_commits
is experimental. Its API and behavior are subject to change in the future without prior notice.
Example:
>>> from huggingface_hub import HfApi, plan_multi_commits
>>> addition_commits, deletion_commits = plan_multi_commits(
... operations=[
... CommitOperationAdd(...),
... CommitOperationAdd(...),
... CommitOperationDelete(...),
... CommitOperationDelete(...),
... CommitOperationAdd(...),
... ],
... )
>>> HfApi().create_commits_on_pr(
... repo_id="my-cool-model",
... addition_commits=addition_commits,
... deletion_commits=deletion_commits,
... (...)
... verbose=True,
... )
The initial order of the operations is not guaranteed! All deletions will be performed before additions. If you are not updating the same file multiple times, this is not an issue.
API Dataclasses
AccessRequest
class huggingface_hub.hf_api.AccessRequest
< source >( username: str fullname: str email: str timestamp: datetime status: Literal[('pending', 'accepted', 'rejected')] fields: Optional[Dict[str, Any]] = None )
Parameters
- username (
str
) — Username of the user who requested access. - fullname (
str
) — Fullname of the user who requested access. - email (
str
) — Email of the user who requested access. - timestamp (
datetime
) — Timestamp of the request. - status (
Literal["pending", "accepted", "rejected"]
) — Status of the request. Can be one of["pending", "accepted", "rejected"]
. - fields (
Dict[str, Any]
, optional) — Additional fields filled by the user in the gate form.
Data structure containing information about a user access request.
CommitInfo
class huggingface_hub.CommitInfo
< source >( *args commit_url: str _url: Optional[str] = None **kwargs )
Parameters
- commit_url (
str
) — Url where to find the commit. - commit_message (
str
) — The summary (first line) of the commit that has been created. - commit_description (
str
) — Description of the commit that has been created. Can be empty. - oid (
str
) — Commit hash id. Example:"91c54ad1727ee830252e457677f467be0bfd8a57"
. - pr_url (
str
, optional) — Url to the PR that has been created, if any. Populated whencreate_pr=True
is passed. - pr_revision (
str
, optional) — Revision of the PR that has been created, if any. Populated whencreate_pr=True
is passed. Example:"refs/pr/1"
. - pr_num (
int
, optional) — Number of the PR discussion that has been created, if any. Populated whencreate_pr=True
is passed. Can be passed asdiscussion_num
in get_discussion_details(). Example:1
. - _url (
str
, optional) — Legacy url forstr
compatibility. Can be the url to the uploaded file on the Hub (if returned by upload_file()), to the uploaded folder on the Hub (if returned by upload_folder()) or to the commit on the Hub (if returned by create_commit()). Defaults tocommit_url
. It is deprecated to use this attribute. Please usecommit_url
instead.
Data structure containing information about a newly created commit.
Returned by any method that creates a commit on the Hub: create_commit(), upload_file(), upload_folder(),
delete_file(), delete_folder(). It inherits from str
for backward compatibility but using methods specific
to str
is deprecated.
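For illustration, the attributes can be read from the return value of any commit-creating method (the repo id below is a placeholder):

```python
>>> from huggingface_hub import upload_file
>>> commit = upload_file(
...     path_or_fileobj=b"content",
...     path_in_repo="file.txt",
...     repo_id="username/my-model",
... )
>>> commit.oid          # commit hash
>>> commit.commit_url   # url to the commit on the Hub
```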
DatasetInfo
class huggingface_hub.hf_api.DatasetInfo
< source >( **kwargs )
Parameters
- id (
str
) — ID of dataset. - author (
str
) — Author of the dataset. - sha (
str
) — Repo SHA at this particular revision. - created_at (
datetime
, optional) — Date of creation of the repo on the Hub. Note that the lowest value is2022-03-02T23:29:04.000Z
, corresponding to the date when we began to store creation dates. - last_modified (
datetime
, optional) — Date of last commit to the repo. - private (
bool
) — Is the repo private. - disabled (
bool
, optional) — Is the repo disabled. - gated (
Literal["auto", "manual", False]
, optional) — Is the repo gated. If so, whether there is manual or automatic approval. - downloads (
int
) — Number of downloads of the dataset. - likes (
int
) — Number of likes of the dataset. - tags (
List[str]
) — List of tags of the dataset. - card_data (
DatasetCardData
, optional) — Model Card Metadata as a huggingface_hub.repocard_data.DatasetCardData object. - siblings (
List[RepoSibling]
) — List of huggingface_hub.hf_api.RepoSibling objects that constitute the dataset.
Contains information about a dataset on the Hub.
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. By contrast, when listing datasets using list_datasets(), only a subset of the attributes is returned.
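To illustrate the difference between a specific query and a listing (the dataset id is only an example):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Full info for a single dataset
>>> info = api.dataset_info("squad")
>>> info.downloads, info.likes
# Listing returns lighter objects with fewer attributes populated
>>> first = next(iter(api.list_datasets(limit=1)))
```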
GitRefInfo
class huggingface_hub.GitRefInfo
< source >( name: str ref: str target_commit: str )
Contains information about a git reference for a repo on the Hub.
GitCommitInfo
class huggingface_hub.GitCommitInfo
< source >( commit_id: str authors: List[str] created_at: datetime title: str message: str formatted_title: Optional[str] formatted_message: Optional[str] )
Parameters
- commit_id (
str
) — OID of the commit (e.g."e7da7f221d5bf496a48136c0cd264e630fe9fcc8"
) - authors (
List[str]
) — List of authors of the commit. - created_at (
datetime
) — Datetime when the commit was created. - title (
str
) — Title of the commit. This is a free-text value entered by the authors. - message (
str
) — Description of the commit. This is a free-text value entered by the authors. - formatted_title (
str
) — Title of the commit formatted as HTML. Only returned ifformatted=True
is set. - formatted_message (
str
) — Description of the commit formatted as HTML. Only returned ifformatted=True
is set.
Contains information about a git commit for a repo on the Hub. Check out list_repo_commits() for more details.
GitRefs
class huggingface_hub.GitRefs
< source >( branches: List[GitRefInfo] converts: List[GitRefInfo] tags: List[GitRefInfo] pull_requests: Optional[List[GitRefInfo]] )
Parameters
- branches (
List[GitRefInfo]
) — A list of GitRefInfo containing information about branches on the repo. - converts (
List[GitRefInfo]
) — A list of GitRefInfo containing information about “convert” refs on the repo. Converts are refs used (internally) to push preprocessed data in Dataset repos. - tags (
List[GitRefInfo]
) — A list of GitRefInfo containing information about tags on the repo. - pull_requests (
List[GitRefInfo]
, optional) — A list of GitRefInfo containing information about pull requests on the repo. Only returned ifinclude_prs=True
is set.
Contains information about all git references for a repo on the Hub.
Object is returned by list_repo_refs().
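A short sketch of inspecting refs (using a public repo as an example):

```python
>>> from huggingface_hub import HfApi
>>> refs = HfApi().list_repo_refs("gpt2")
>>> [branch.name for branch in refs.branches]
>>> [tag.name for tag in refs.tags]
```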
ModelInfo
class huggingface_hub.hf_api.ModelInfo
< source >( **kwargs )
Parameters
- id (
str
) — ID of model. - author (
str
, optional) — Author of the model. - sha (
str
, optional) — Repo SHA at this particular revision. - created_at (
datetime
, optional) — Date of creation of the repo on the Hub. Note that the lowest value is2022-03-02T23:29:04.000Z
, corresponding to the date when we began to store creation dates. - last_modified (
datetime
, optional) — Date of last commit to the repo. - private (
bool
) — Is the repo private. - disabled (
bool
, optional) — Is the repo disabled. - gated (
Literal["auto", "manual", False]
, optional) — Is the repo gated. If so, whether there is manual or automatic approval. - downloads (
int
) — Number of downloads of the model. - likes (
int
) — Number of likes of the model. - library_name (
str
, optional) — Library associated with the model. - tags (
List[str]
) — List of tags of the model. Compared tocard_data.tags
, contains extra tags computed by the Hub (e.g. supported libraries, model’s arXiv). - pipeline_tag (
str
, optional) — Pipeline tag associated with the model. - mask_token (
str
, optional) — Mask token used by the model. - widget_data (
Any
, optional) — Widget data associated with the model. - model_index (
Dict
, optional) — Model index for evaluation. - config (
Dict
, optional) — Model configuration. - transformers_info (
TransformersInfo
, optional) — Transformers-specific info (auto class, processor, etc.) associated with the model. - card_data (
ModelCardData
, optional) — Model Card Metadata as a huggingface_hub.repocard_data.ModelCardData object. - siblings (
List[RepoSibling]
) — List of huggingface_hub.hf_api.RepoSibling objects that constitute the model. - spaces (
List[str]
, optional) — List of spaces using the model. - safetensors (
SafeTensorsInfo
, optional) — Model’s safetensors information.
Contains information about a model on the Hub.
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. By contrast, when listing models using list_models(), only a subset of the attributes is returned.
RepoSibling
class huggingface_hub.hf_api.RepoSibling
< source >( rfilename: str size: Optional[int] = None blob_id: Optional[str] = None lfs: Optional[BlobLfsInfo] = None )
Parameters
- rfilename (str) — file name, relative to the repo root.
- size (
int
, optional) — The file’s size, in bytes. This attribute is defined whenfiles_metadata
argument of repo_info() is set toTrue
. It’sNone
otherwise. - blob_id (
str
, optional) — The file’s git OID. This attribute is defined whenfiles_metadata
argument of repo_info() is set toTrue
. It’sNone
otherwise. - lfs (
BlobLfsInfo
, optional) — The file’s LFS metadata. This attribute is defined whenfiles_metadata
argument of repo_info() is set toTrue
and the file is stored with Git LFS. It’sNone
otherwise.
Contains basic information about a repo file inside a repo on the Hub.
All attributes of this class are optional except rfilename
. This is because only the file names are returned when
listing repositories on the Hub (with list_models(), list_datasets() or list_spaces()). If you need more
information like file size, blob id or lfs details, you must request them specifically from one repo at a time
(using model_info(), dataset_info() or space_info()) as it adds more constraints on the backend server to
retrieve these.
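For example, requesting file metadata for a single repo might look like this (using a public model as an example):

```python
>>> from huggingface_hub import HfApi
>>> info = HfApi().model_info("gpt2", files_metadata=True)
>>> for sibling in info.siblings:
...     print(sibling.rfilename, sibling.size)
```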
RepoFile
class huggingface_hub.hf_api.RepoFile
< source >( **kwargs )
Parameters
- path (str) — file path relative to the repo root.
- size (
int
) — The file’s size, in bytes. - blob_id (
str
) — The file’s git OID. - lfs (
BlobLfsInfo
) — The file’s LFS metadata. - last_commit (
LastCommitInfo
, optional) — The file’s last commit metadata. Only defined if list_files_info(), list_repo_tree() and get_paths_info() are called withexpand=True
. - security (
BlobSecurityInfo
, optional) — The file’s security scan metadata. Only defined if list_files_info(), list_repo_tree() and get_paths_info() are called withexpand=True
.
Contains information about a file on the Hub.
RepoUrl
class huggingface_hub.RepoUrl
< source >( url: Any endpoint: Optional[str] = None )
Parameters
- url (
Any
) — String value of the repo url. - endpoint (
str
, optional) — Endpoint of the Hub. Defaults to https://huggingface.co.
Raises
ValueError
- ValueError — If the URL cannot be parsed.
- ValueError — If repo_type is unknown.
Subclass of str
describing a repo URL on the Hub.
RepoUrl
is returned by HfApi.create_repo
. It inherits from str
for backward
compatibility. At initialization, the URL is parsed to populate properties:
- endpoint (
str
) - namespace (
Optional[str]
) - repo_name (
str
) - repo_id (
str
) - repo_type (
Literal["model", "dataset", "space"]
) - url (
str
)
Example:
>>> RepoUrl('https://huggingface.co/gpt2')
RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2')
>>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co')
RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset')
>>> RepoUrl('hf://datasets/my-user/my-dataset')
RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='my-user/my-dataset')
>>> HfApi().create_repo("dummy_model")
RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model')
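As a rough sketch of the parsing described above, the hypothetical helper below splits an https Hub URL into a repo type and repo id. It is a simplification of what RepoUrl does at initialization (it does not handle the `hf://` scheme or custom endpoints):

```python
from urllib.parse import urlparse

def parse_repo_url(url: str):
    """Return (repo_type, repo_id) for an https Hub URL. Simplified sketch."""
    parts = urlparse(url).path.strip("/").split("/")
    if parts[0] in ("datasets", "spaces"):
        # "datasets" -> "dataset", "spaces" -> "space"
        return parts[0].rstrip("s"), "/".join(parts[1:])
    # Model repos have no type prefix in the URL path.
    return "model", "/".join(parts)

print(parse_repo_url("https://huggingface.co/gpt2"))
print(parse_repo_url("https://huggingface.co/datasets/dummy_user/dummy_dataset"))
```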
SafetensorsRepoMetadata
class huggingface_hub.utils.SafetensorsRepoMetadata
< source >( metadata: typing.Optional[typing.Dict] sharded: bool weight_map: typing.Dict[str, str] files_metadata: typing.Dict[str, huggingface_hub.utils._safetensors.SafetensorsFileMetadata] )
Parameters
- metadata (
Dict
, optional) — The metadata contained in the ‘model.safetensors.index.json’ file, if it exists. Only populated for sharded models. - sharded (
bool
) — Whether the repo contains a sharded model or not. - weight_map (
Dict[str, str]
) — A map of all weights. Keys are tensor names and values are filenames of the files containing the tensors. - files_metadata (
Dict[str, SafetensorsFileMetadata]
) — A map of all files metadata. Keys are filenames and values are the metadata of the corresponding file, as aSafetensorsFileMetadata
object. - parameter_count (
Dict[str, int]
) — A map of the number of parameters per data type. Keys are data types and values are the number of parameters of that data type.
Metadata for a Safetensors repo.
A repo is considered to be a Safetensors repo if it contains either a ‘model.safetensors’ weight file (non-sharded model) or a ‘model.safetensors.index.json’ index file (sharded model) at its root.
This class is returned by get_safetensors_metadata().
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
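The detection rule above can be sketched as a small helper. The function below is illustrative only, not the library's implementation; it classifies a repo from its root file names:

```python
def classify_safetensors_repo(root_files):
    """Classify a repo from its root file names (illustrative sketch)."""
    if "model.safetensors.index.json" in root_files:
        return "sharded"       # index file -> sharded model
    if "model.safetensors" in root_files:
        return "single-file"   # single weight file -> non-sharded model
    return None                # not a safetensors repo

print(classify_safetensors_repo({"model.safetensors", "config.json"}))
```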
SafetensorsFileMetadata
class huggingface_hub.utils.SafetensorsFileMetadata
< source >( metadata: typing.Dict[str, str] tensors: typing.Dict[str, huggingface_hub.utils._safetensors.TensorInfo] )
Parameters
- metadata (
Dict
) — The metadata contained in the file. - tensors (
Dict[str, TensorInfo]
) — A map of all tensors. Keys are tensor names and values are information about the corresponding tensor, as aTensorInfo
object. - parameter_count (
Dict[str, int]
) — A map of the number of parameters per data type. Keys are data types and values are the number of parameters of that data type.
Metadata for a Safetensors file hosted on the Hub.
This class is returned by parse_safetensors_file_metadata().
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
SpaceInfo
class huggingface_hub.hf_api.SpaceInfo
< source >( **kwargs )
Parameters
- id (
str
) — ID of the Space. - author (
str
, optional) — Author of the Space. - sha (
str
, optional) — Repo SHA at this particular revision. - created_at (
datetime
, optional) — Date of creation of the repo on the Hub. Note that the lowest value is2022-03-02T23:29:04.000Z
, corresponding to the date when we began to store creation dates. - last_modified (
datetime
, optional) — Date of last commit to the repo. - private (
bool
) — Is the repo private. - gated (
Literal["auto", "manual", False]
, optional) — Is the repo gated. If so, whether there is manual or automatic approval. - disabled (
bool
, optional) — Is the Space disabled. - host (
str
, optional) — Host URL of the Space. - subdomain (
str
, optional) — Subdomain of the Space. - likes (
int
) — Number of likes of the Space. - tags (
List[str]
) — List of tags of the Space. - siblings (
List[RepoSibling]
) — List of huggingface_hub.hf_api.RepoSibling objects that constitute the Space. - card_data (
SpaceCardData
, optional) — Space Card Metadata as a huggingface_hub.repocard_data.SpaceCardData object. - runtime (
SpaceRuntime
, optional) — Space runtime information as a huggingface_hub.hf_api.SpaceRuntime object. - sdk (
str
, optional) — SDK used by the Space. - models (
List[str]
, optional) — List of models used by the Space. - datasets (
List[str]
, optional) — List of datasets used by the Space.
Contains information about a Space on the Hub.
Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. For example, when listing spaces using list_spaces(), only a subset of the attributes is returned.
TensorInfo
class huggingface_hub.utils.TensorInfo
< source >( dtype: typing.Literal['F64', 'F32', 'F16', 'BF16', 'I64', 'I32', 'I16', 'I8', 'U8', 'BOOL'] shape: typing.List[int] data_offsets: typing.Tuple[int, int] )
Parameters
- dtype (
str
) — The data type of the tensor (“F64”, “F32”, “F16”, “BF16”, “I64”, “I32”, “I16”, “I8”, “U8”, “BOOL”). - shape (
List[int]
) — The shape of the tensor. - data_offsets (
Tuple[int, int]
) — The offsets of the data in the file as a tuple[BEGIN, END]
. - parameter_count (
int
) — The number of parameters in the tensor.
Information about a tensor.
For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
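The `parameter_count` attribute is simply the product of the tensor's shape, and `data_offsets` spans `parameter_count * sizeof(dtype)` bytes. The dictionary entries below are made-up examples shaped like TensorInfo:

```python
from math import prod

# Hypothetical header entries, shaped like TensorInfo (dtype, shape, data_offsets).
# F32 tensors take 4 bytes per parameter, hence the offset spans below.
tensors = {
    "embed.weight": {"dtype": "F32", "shape": [50257, 768], "data_offsets": (0, 154389504)},
    "ln.bias":      {"dtype": "F32", "shape": [768], "data_offsets": (154389504, 154392576)},
}

def parameter_count(shape):
    # The parameter count of a tensor is the product of its shape.
    return prod(shape)

total = sum(parameter_count(t["shape"]) for t in tensors.values())
print(total)  # 50257*768 + 768 = 38598144
```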
User
class huggingface_hub.User
< source >( avatar_url: str username: str fullname: str )
Contains information about a user on the Hub.
UserLikes
class huggingface_hub.UserLikes
< source >( user: str total: int datasets: List[str] models: List[str] spaces: List[str] )
Parameters
- user (
str
) — Name of the user for which we fetched the likes. - total (
int
) — Total number of likes. - datasets (
List[str]
) — List of datasets liked by the user (as repo_ids). - models (
List[str]
) — List of models liked by the user (as repo_ids). - spaces (
List[str]
) — List of spaces liked by the user (as repo_ids).
Contains information about a user's likes on the Hub.
CommitOperation
Below are the supported values for CommitOperation()
:
class huggingface_hub.CommitOperationAdd
< source >( path_in_repo: str path_or_fileobj: typing.Union[str, pathlib.Path, bytes, typing.BinaryIO] )
Parameters
- path_in_repo (
str
) — Relative filepath in the repo, for example:"checkpoints/1fec34a/weights.bin"
- path_or_fileobj (
str
,Path
,bytes
, orBinaryIO
) — Either:- a path to a local file (as
str
orpathlib.Path
) to upload - a buffer of bytes (
bytes
) holding the content of the file to upload - a “file object” (subclass of
io.BufferedIOBase
), typically obtained withopen(path, "rb")
. It must supportseek()
andtell()
methods.
Raises
ValueError
ValueError
— Ifpath_or_fileobj
is not one ofstr
,Path
,bytes
orio.BufferedIOBase
.ValueError
— Ifpath_or_fileobj
is astr
orPath
but not a path to an existing file.ValueError
— Ifpath_or_fileobj
is aio.BufferedIOBase
but it doesn’t support bothseek()
andtell()
.
Data structure holding necessary info to upload a file to a repository on the Hub.
as_file
< source >( with_tqdm: bool = False )
A context manager that yields a file-like object for reading the underlying
data behind path_or_fileobj
.
Example:
>>> operation = CommitOperationAdd(
... path_in_repo="remote/dir/weights.h5",
... path_or_fileobj="./local/weights.h5",
... )
>>> with operation.as_file() as file:
... content = file.read()
>>> with operation.as_file(with_tqdm=True) as file:
... while True:
... data = file.read(1024)
... if not data:
... break
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
>>> with operation.as_file(with_tqdm=True) as file:
... requests.put(..., data=file)
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
class huggingface_hub.CommitOperationDelete
< source >( path_in_repo: str is_folder: typing.Union[bool, typing.Literal['auto']] = 'auto' )
Parameters
- path_in_repo (
str
) — Relative filepath in the repo, for example:"checkpoints/1fec34a/weights.bin"
for a file or"checkpoints/1fec34a/"
for a folder. - is_folder (
bool
orLiteral["auto"]
, optional) — Whether the Delete Operation applies to a folder or not. If “auto”, the path type (file or folder) is guessed automatically by looking if path ends with a ”/” (folder) or not (file). To explicitly set the path type, you can setis_folder=True
oris_folder=False
.
Data structure holding necessary info to delete a file or a folder from a repository on the Hub.
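The `"auto"` rule described above can be sketched as follows. This hypothetical helper is not the library's code; it only illustrates how a trailing slash distinguishes a folder from a file:

```python
def resolve_is_folder(path_in_repo: str, is_folder="auto") -> bool:
    """Sketch of the 'auto' rule: a trailing '/' marks a folder."""
    if is_folder == "auto":
        return path_in_repo.endswith("/")
    return bool(is_folder)

print(resolve_is_folder("checkpoints/1fec34a/weights.bin"))  # file -> False
print(resolve_is_folder("checkpoints/1fec34a/"))             # folder -> True
```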
class huggingface_hub.CommitOperationCopy
< source >( src_path_in_repo: str path_in_repo: str src_revision: typing.Optional[str] = None )
Parameters
- src_path_in_repo (
str
) — Relative filepath in the repo of the file to be copied, e.g."checkpoints/1fec34a/weights.bin"
. - path_in_repo (
str
) — Relative filepath in the repo where to copy the file, e.g."checkpoints/1fec34a/weights_copy.bin"
. - src_revision (
str
, optional) — The git revision of the file to be copied. Can be any valid git revision. Defaults to the target commit revision.
Data structure holding necessary info to copy a file in a repository on the Hub.
Limitations:
- Only LFS files can be copied. To copy a regular file, you need to download it locally and re-upload it.
- Cross-repository copies are not supported.
Note: you can combine a CommitOperationCopy and a CommitOperationDelete to rename an LFS file on the Hub.
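The copy-then-delete rename pattern can be sketched with hypothetical stand-in classes (the real ones are CommitOperationCopy and CommitOperationDelete in huggingface_hub; both operations must be passed to a single commit):

```python
from dataclasses import dataclass

# Hypothetical stand-ins used only to illustrate the combination.
@dataclass
class CopyOp:
    src_path_in_repo: str
    path_in_repo: str

@dataclass
class DeleteOp:
    path_in_repo: str

def rename_lfs_file(old_path: str, new_path: str):
    # Copy old -> new, then delete old; committed together, this renames the file.
    return [CopyOp(old_path, new_path), DeleteOp(old_path)]

ops = rename_lfs_file("weights.bin", "weights_v2.bin")
```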
CommitScheduler
class huggingface_hub.CommitScheduler
< source >( repo_id: str folder_path: typing.Union[str, pathlib.Path] every: typing.Union[int, float] = 5 path_in_repo: typing.Optional[str] = None repo_type: typing.Optional[str] = None revision: typing.Optional[str] = None private: bool = False token: typing.Optional[str] = None allow_patterns: typing.Union[str, typing.List[str], NoneType] = None ignore_patterns: typing.Union[str, typing.List[str], NoneType] = None squash_history: bool = False hf_api: typing.Optional[ForwardRef('HfApi')] = None )
Parameters
- repo_id (
str
) — The id of the repo to commit to. - folder_path (
str
orPath
) — Path to the local folder to upload regularly. - every (
int
orfloat
, optional) — The number of minutes between each commit. Defaults to 5 minutes. - path_in_repo (
str
, optional) — Relative path of the directory in the repo, for example:"checkpoints/"
. Defaults to the root folder of the repository. - repo_type (
str
, optional) — The type of the repo to commit to. Defaults tomodel
. - revision (
str
, optional) — The revision of the repo to commit to. Defaults tomain
. - private (
bool
, optional) — Whether to make the repo private. Defaults toFalse
. This value is ignored if the repo already exists. - token (
str
, optional) — The token to use to commit to the repo. Defaults to the token saved on the machine. - allow_patterns (
List[str]
orstr
, optional) — If provided, only files matching at least one pattern are uploaded. - ignore_patterns (
List[str]
orstr
, optional) — If provided, files matching any of the patterns are not uploaded. - squash_history (
bool
, optional) — Whether to squash the history of the repo after each commit. Defaults toFalse
. Squashing commits is useful to avoid degraded performance on the repo when it grows too large. - hf_api (
HfApi
, optional) — The HfApi client to use to commit to the Hub. Can be set with custom settings (user agent, token,…).
Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).
The scheduler is started when instantiated and runs indefinitely. At the end of your script, a last commit is triggered. Check out the upload guide to learn more about how to use it.
Example:
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler
# Scheduler uploads every 10 minutes
>>> csv_path = Path("watched_folder/data.csv")
>>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)
>>> with csv_path.open("a") as f:
... f.write("first line")
# Some time later (...)
>>> with csv_path.open("a") as f:
... f.write("second line")
Push the folder to the Hub and return the commit info.
This method is not meant to be called directly. It is run in the background by the scheduler, respecting a queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency issues.
The default behavior of push_to_hub
is to assume an append-only folder. It lists all files in the folder and
uploads only changed files. If no changes are found, the method returns without committing anything. If you want
to change this behavior, you can inherit from CommitScheduler and override this method. This can be useful
for example to compress data together in a single file before committing. For more details and examples, check
out our integration guide.
Stop the scheduler.
A stopped scheduler cannot be restarted. Mostly for testing purposes.
Trigger a push_to_hub
and return a future.
This method is automatically called every every
minutes. You can also call it manually to trigger a commit
immediately, without waiting for the next scheduled commit.
Search helpers
Some helpers to filter repositories on the Hub are available in the huggingface_hub
package.
DatasetFilter
class huggingface_hub.DatasetFilter
< source >( author: typing.Optional[str] = None benchmark: typing.Union[str, typing.List[str], NoneType] = None dataset_name: typing.Optional[str] = None language_creators: typing.Union[str, typing.List[str], NoneType] = None language: typing.Union[str, typing.List[str], NoneType] = None multilinguality: typing.Union[str, typing.List[str], NoneType] = None size_categories: typing.Union[str, typing.List[str], NoneType] = None task_categories: typing.Union[str, typing.List[str], NoneType] = None task_ids: typing.Union[str, typing.List[str], NoneType] = None )
Parameters
- author (
str
, optional) — A string or list of strings that can be used to identify datasets on the Hub by the original uploader (author or organization), such asfacebook
orhuggingface
. - benchmark (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub by their official benchmark. - dataset_name (
str
, optional) — A string or list of strings that can be used to identify datasets on the Hub by its name, such asSQAC
orwikineural
- language_creators (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub with how the data was curated, such ascrowdsourced
ormachine_generated
. - language (
str
orList
, optional) — A string or list of strings representing a two-character language to filter datasets by on the Hub. - multilinguality (
str
orList
, optional) — A string or list of strings representing a filter for datasets that contain multiple languages. - size_categories (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub by the size of the dataset such as100K<n<1M
or1M<n<10M
. - task_categories (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub by the designed task, such asaudio_classification
ornamed_entity_recognition
. - task_ids (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub by the specific task such asspeech_emotion_recognition
orparaphrase
.
A class that converts human-readable dataset search parameters into ones compatible with the REST API. Capitalization does not matter for any parameter.
Examples:
>>> from huggingface_hub import DatasetFilter
>>> # Using author
>>> new_filter = DatasetFilter(author="facebook")
>>> # Using benchmark
>>> new_filter = DatasetFilter(benchmark="raft")
>>> # Using dataset_name
>>> new_filter = DatasetFilter(dataset_name="wikineural")
>>> # Using language_creators
>>> new_filter = DatasetFilter(language_creators="crowdsourced")
>>> # Using language
>>> new_filter = DatasetFilter(language="en")
>>> # Using multilinguality
>>> new_filter = DatasetFilter(multilinguality="multilingual")
>>> # Using size_categories
>>> new_filter = DatasetFilter(size_categories="100K<n<1M")
>>> # Using task_categories
>>> new_filter = DatasetFilter(task_categories="audio_classification")
>>> # Using task_ids
>>> new_filter = DatasetFilter(task_ids="paraphrase")
ModelFilter
class huggingface_hub.ModelFilter
< source >( author: typing.Optional[str] = None library: typing.Union[str, typing.List[str], NoneType] = None language: typing.Union[str, typing.List[str], NoneType] = None model_name: typing.Optional[str] = None task: typing.Union[str, typing.List[str], NoneType] = None trained_dataset: typing.Union[str, typing.List[str], NoneType] = None tags: typing.Union[str, typing.List[str], NoneType] = None )
Parameters
- author (
str
, optional) — A string that can be used to identify models on the Hub by the original uploader (author or organization), such asfacebook
orhuggingface
. - library (
str
orList
, optional) — A string or list of strings of foundational libraries models were originally trained from, such as pytorch, tensorflow, or allennlp. - language (
str
orList
, optional) — A string or list of strings of languages, both by name and country code, such as “en” or “English” - model_name (
str
, optional) — A string that contains complete or partial names for models on the Hub, such as “bert” or “bert-base-cased”. - task (
str
orList
, optional) — A string or list of strings of tasks models were designed for, such as: “fill-mask” or “automatic-speech-recognition” - tags (
str
orList
, optional) — A string tag or a list of tags to filter models on the Hub by, such astext-generation
orspacy
. - trained_dataset (
str
orList
, optional) — A string tag or a list of string tags of the trained dataset for a model on the Hub.
A class that converts human-readable model search parameters into ones compatible with the REST API. Capitalization does not matter for any parameter.
>>> from huggingface_hub import ModelFilter
>>> # For the author
>>> new_filter = ModelFilter(author="facebook")
>>> # For the library
>>> new_filter = ModelFilter(library="pytorch")
>>> # For the language
>>> new_filter = ModelFilter(language="french")
>>> # For the model_name
>>> new_filter = ModelFilter(model_name="bert")
>>> # For the task
>>> new_filter = ModelFilter(task="text-classification")
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # Using tags
>>> new_filter = ModelFilter(tags="benchmark:raft")
>>> # Related to the dataset
>>> new_filter = ModelFilter(trained_dataset="common_voice")