HfApi Client
Below is the documentation for the HfApi class, which serves as a Python wrapper for the Hugging Face Hub's API.
All methods from the HfApi class are also accessible directly from the package's root. Both approaches are detailed below.
Using the root method is more straightforward, but the HfApi class gives you more flexibility.
In particular, you can pass a token that will be reused in all HTTP calls. This differs from huggingface-cli login
or login() in that the token is not persisted on the machine.
It is also possible to provide a different endpoint or configure a custom user-agent.
from huggingface_hub import HfApi, list_models
# Use root method
models = list_models()
# Or configure a HfApi client
hf_api = HfApi(
endpoint="https://huggingface.co", # Can be a Private Hub endpoint.
token="hf_xxx", # Token is not persisted on the machine.
)
models = hf_api.list_models()
HfApi
class huggingface_hub.HfApi
< source >( endpoint: Optional[str] = None token: Optional[str] = None library_name: Optional[str] = None library_version: Optional[str] = None user_agent: Union[Dict, str, None] = None )
add_space_secret
< source >( repo_id: str key: str value: str description: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
- key (str) — Secret key. Example: "GITHUB_API_KEY"
- value (str) — Secret value. Example: "your_github_api_key".
- description (str, optional) — Secret description. Example: "Github API key to access the Github API".
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Adds or updates a secret in a Space.
Secrets allow you to set secret keys or tokens in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
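Example (a sketch; the repo id, secret values and token below are placeholders, and the Space must belong to you):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.add_space_secret(
...     repo_id="username/my-space",
...     key="GITHUB_API_KEY",
...     value="your_github_api_key",
...     description="Github API key to access the Github API",
... )
```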
add_space_variable
< source >( repo_id: str key: str value: str description: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
- key (str) — Variable key. Example: "MODEL_REPO_ID"
- value (str) — Variable value. Example: "the_model_repo_id".
- description (str, optional) — Description of the variable. Example: "Model Repo ID of the implemented model".
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Adds or updates a variable in a Space.
Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
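Example (a sketch; the repo id, variable values and token below are placeholders, and the Space must belong to you):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.add_space_variable(
...     repo_id="username/my-space",
...     key="MODEL_REPO_ID",
...     value="the_model_repo_id",
...     description="Model Repo ID of the implemented model",
... )
```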
change_discussion_status
< source >( repo_id: str discussion_num: int new_status: Literal[('open', 'closed')] token: Optional[str] = None comment: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionStatusChange
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
- new_status (str) — The new status for the discussion, either "open" or "closed".
- comment (str, optional) — An optional comment to post with the status change.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
Returns
the status change event
Closes or re-opens a Discussion or Pull Request.
Examples:
>>> HfApi().change_discussion_status(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_status="closed",
... )
# DiscussionStatusChange(id='deadbeef0000000', type='status-change', ...)
Raises the following errors:
- HTTPError — if the HuggingFace API returned an error
- ValueError — if some parameter value is invalid
- RepositoryNotFoundError — If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
comment_discussion
< source >( repo_id: str discussion_num: int comment: str token: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionComment
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
- comment (str) — The content of the comment to create. Comments support markdown formatting.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
Returns
the newly created comment
Creates a new comment on the given Discussion.
Examples:
>>> comment = """
... Hello @otheruser!
...
... # This is a title
...
... **This is bold**, *this is italic* and ~this is strikethrough~
... And [this](http://url) is a link
... """
>>> HfApi().comment_discussion(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     comment=comment,
... )
# DiscussionComment(id='deadbeef0000000', type='comment', ...)
Raises the following errors:
- HTTPError — if the HuggingFace API returned an error
- ValueError — if some parameter value is invalid
- RepositoryNotFoundError — If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
create_branch
< source >( repo_id: str branch: str revision: Optional[str] = None token: Optional[str] = None repo_type: Optional[str] = None exist_ok: bool = False )
Parameters
- repo_id (str) — The repository in which the branch will be created. Example: "user/my-cool-model".
- branch (str) — The name of the branch to create.
- revision (str, optional) — The git revision to create the branch from. It can be a branch name or the OID/SHA of a commit, as a hexadecimal string. Defaults to the head of the "main" branch.
- token (str, optional) — Authentication token. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if creating a branch on a dataset or space, None or "model" if creating a branch on a model. Default is None.
- exist_ok (bool, optional, defaults to False) — If True, do not raise an error if the branch already exists.
Raises
RepositoryNotFoundError or BadRequestError or HfHubHTTPError
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- BadRequestError — If invalid reference for a branch. Ex: refs/pr/5 or "refs/foo/bar".
- HfHubHTTPError — If the branch already exists on the repo (error 409) and exist_ok is set to False.
Create a new branch for a repo on the Hub, starting from the specified revision (defaults to "main").
To find a revision suiting your needs, you can use list_repo_refs() or list_repo_commits().
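Example (a sketch; the repo id, branch name and token below are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.create_branch(
...     repo_id="user/my-cool-model",
...     branch="experiment-1",
...     exist_ok=True,  # do not fail if the branch already exists
... )
```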
create_commit
< source >(
repo_id: str
operations: Iterable[CommitOperation]
commit_message: str
commit_description: Optional[str] = None
token: Optional[str] = None
repo_type: Optional[str] = None
revision: Optional[str] = None
create_pr: Optional[bool] = None
num_threads: int = 5
parent_commit: Optional[str] = None
run_as_future: bool = False
)
→
CommitInfo or Future
Parameters
- repo_id (str) — The repository in which the commit will be created, for example: "username/custom_transformers"
- operations (Iterable of CommitOperation()) — An iterable of operations to include in the commit, either:
  - CommitOperationAdd to upload a file
  - CommitOperationDelete to delete a file
  - CommitOperationCopy to copy a file
- commit_message (str) — The summary (first line) of the commit that will be created.
- commit_description (str, optional) — The description of the commit that will be created.
- token (str, optional) — Authentication token, obtained with HfApi.login method. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
- create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
- num_threads (int, optional) — Number of concurrent threads for uploading files. Defaults to 5. Setting it to 2 means at most 2 files will be uploaded concurrently.
- parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (first 7 characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
- run_as_future (bool, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passing run_as_future=True will return a Future object. Defaults to False.
Returns
CommitInfo or Future
Instance of CommitInfo containing information about the newly created commit (commit hash, commit url, pr url, commit message, …). If run_as_future=True is passed, returns a Future object which will contain the result when executed.
Raises
ValueError or RepositoryNotFoundError
- ValueError — If commit message is empty.
- ValueError — If parent commit is not a valid commit OID.
- ValueError — If the Hub API returns an HTTP 400 error (bad request).
- ValueError — If create_pr is True and revision is neither None nor "main".
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
Creates a commit in the given repo, deleting & uploading files as needed.
create_commit assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If the repo does not exist, create it first using create_repo().
create_commit is limited to 25k LFS files and a 1GB payload for regular files.
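A minimal sketch of a commit mixing an upload and a deletion (the repo id, file paths and token below are placeholders):

```python
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationDelete
>>> api = HfApi(token="hf_xxx")
>>> operations = [
...     CommitOperationAdd(path_in_repo="weights.bin", path_or_fileobj="./weights.bin"),
...     CommitOperationDelete(path_in_repo="old_weights.bin"),
... ]
>>> api.create_commit(
...     repo_id="username/custom_transformers",
...     operations=operations,
...     commit_message="Swap in new weights",
... )
```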
create_commits_on_pr
< source >(
repo_id: str
addition_commits: List[List[CommitOperationAdd]]
deletion_commits: List[List[CommitOperationDelete]]
commit_message: str
commit_description: Optional[str] = None
token: Optional[str] = None
repo_type: Optional[str] = None
merge_pr: bool = True
num_threads: int = 5
verbose: bool = False
)
→
str
Parameters
- repo_id (str) — The repository in which the commits will be pushed. Example: "username/my-cool-model".
- addition_commits (List of List of CommitOperationAdd) — A list containing lists of CommitOperationAdd. Each sublist will result in a commit on the PR.
- deletion_commits (List of List of CommitOperationDelete) — A list containing lists of CommitOperationDelete. Each sublist will result in a commit on the PR. Deletion commits are pushed before addition commits.
- commit_message (str) — The summary (first line) of the commit that will be created. Will also be the title of the PR.
- commit_description (str, optional) — The description of the commit that will be created. The description will be added to the PR.
- token (str, optional) — Authentication token, obtained with HfApi.login method. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- merge_pr (bool) — If set to True, the Pull Request is merged at the end of the process. Defaults to True.
- num_threads (int, optional) — Number of concurrent threads for uploading files. Defaults to 5.
- verbose (bool) — If set to True, process will run in verbose mode, i.e. print information about the ongoing tasks. Defaults to False.
Returns
str
URL to the created PR.
Raises
MultiCommitException
- MultiCommitException — If an unexpected issue occurs in the process: empty commits, unexpected commits in a PR, unexpected PR description, etc.
Push changes to the Hub in multiple commits.
Commits are pushed to a draft PR branch. If the upload fails or gets interrupted, it can be resumed. Progress is tracked in the PR description. At the end of the process, the PR is set as open and the title is updated to match the initial commit message. If merge_pr=True is passed, the PR is merged automatically.
All deletion commits are pushed first, followed by the addition commits. The order of the commits is not guaranteed as we might implement parallel commits in the future. Make sure you are not updating the same file several times.
create_commits_on_pr is experimental. Its API and behavior is subject to change in the future without prior notice.
Example:
>>> from huggingface_hub import HfApi, plan_multi_commits
>>> addition_commits, deletion_commits = plan_multi_commits(
... operations=[
... CommitOperationAdd(...),
... CommitOperationAdd(...),
... CommitOperationDelete(...),
... CommitOperationDelete(...),
... CommitOperationAdd(...),
... ],
... )
>>> HfApi().create_commits_on_pr(
... repo_id="my-cool-model",
... addition_commits=addition_commits,
... deletion_commits=deletion_commits,
... (...)
... verbose=True,
... )
create_commits_on_pr assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If the repo does not exist, create it first using create_repo().
create_discussion
< source >( repo_id: str title: str token: Optional[str] = None description: Optional[str] = None repo_type: Optional[str] = None pull_request: bool = False )
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- title (str) — The title of the discussion. It can be up to 200 characters long, and must be at least 3 characters long. Leading and trailing whitespaces will be stripped.
- token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
- description (str, optional) — An optional description for the Pull Request. Defaults to "Discussion opened with the huggingface_hub Python library"
- pull_request (bool, optional) — Whether to create a Pull Request or discussion. If True, creates a Pull Request. If False, creates a discussion. Defaults to False.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
Creates a Discussion or Pull Request.
Pull Requests created programmatically will be in "draft" status.
Creating a Pull Request with changes can also be done at once with HfApi.create_commit().
Returns: DiscussionWithDetails
Raises the following errors:
- HTTPError — if the HuggingFace API returned an error
- ValueError — if some parameter value is invalid
- RepositoryNotFoundError — If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
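Example (a sketch; the repo id, title and token below are placeholders; pass pull_request=True to open a draft Pull Request instead):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> discussion = api.create_discussion(
...     repo_id="username/repo_name",
...     title="Question about the training data",
... )
```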
create_pull_request
< source >( repo_id: str title: str token: Optional[str] = None description: Optional[str] = None repo_type: Optional[str] = None )
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- title (str) — The title of the discussion. It can be up to 200 characters long, and must be at least 3 characters long. Leading and trailing whitespaces will be stripped.
- token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
- description (str, optional) — An optional description for the Pull Request. Defaults to "Discussion opened with the huggingface_hub Python library"
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
Creates a Pull Request. Pull Requests created programmatically will be in "draft" status.
Creating a Pull Request with changes can also be done at once with HfApi.create_commit(). This is a wrapper around HfApi.create_discussion().
Returns: DiscussionWithDetails
Raises the following errors:
- HTTPError — if the HuggingFace API returned an error
- ValueError — if some parameter value is invalid
- RepositoryNotFoundError — If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
create_repo
< source >( repo_id: str token: Optional[str] = None private: bool = False repo_type: Optional[str] = None exist_ok: bool = False space_sdk: Optional[str] = None space_hardware: Optional[SpaceHardware] = None space_storage: Optional[SpaceStorage] = None space_sleep_time: Optional[int] = None space_secrets: Optional[List[Dict[str, str]]] = None space_variables: Optional[List[Dict[str, str]]] = None ) → RepoUrl
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
- private (bool, optional, defaults to False) — Whether the model repo should be private.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- exist_ok (bool, optional, defaults to False) — If True, do not raise an error if repo already exists.
- space_sdk (str, optional) — Choice of SDK to use if repo_type is "space". Can be "streamlit", "gradio", "docker", or "static".
- space_hardware (SpaceHardware or str, optional) — Choice of Hardware if repo_type is "space". See SpaceHardware for a complete list.
- space_storage (SpaceStorage or str, optional) — Choice of persistent storage tier. Example: "small". See SpaceStorage for a complete list.
- space_sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- space_secrets (List[Dict[str, str]], optional) — A list of secret keys to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
- space_variables (List[Dict[str, str]], optional) — A list of public environment variables to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
Returns
URL to the newly created repo. Value is a subclass of str containing attributes like endpoint, repo_type and repo_id.
Create an empty repo on the HuggingFace Hub.
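Example (a sketch; the repo id and token below are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.create_repo(repo_id="username/test-model", private=True)
# RepoUrl('https://huggingface.co/username/test-model', ...)
```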
create_tag
< source >( repo_id: str tag: str tag_message: Optional[str] = None revision: Optional[str] = None token: Optional[str] = None repo_type: Optional[str] = None exist_ok: bool = False )
Parameters
- repo_id (str) — The repository in which a commit will be tagged. Example: "user/my-cool-model".
- tag (str) — The name of the tag to create.
- tag_message (str, optional) — The description of the tag to create.
- revision (str, optional) — The git revision to tag. It can be a branch name or the OID/SHA of a commit, as a hexadecimal string. Shorthands (first 7 characters) are also supported. Defaults to the head of the "main" branch.
- token (str, optional) — Authentication token. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if tagging a dataset or space, None or "model" if tagging a model. Default is None.
- exist_ok (bool, optional, defaults to False) — If True, do not raise an error if tag already exists.
Raises
RepositoryNotFoundError or RevisionNotFoundError or HfHubHTTPError
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If revision is not found (error 404) on the repo.
- HfHubHTTPError — If the tag already exists on the repo (error 409) and exist_ok is set to False.
Tag a given commit of a repo on the Hub.
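Example (a sketch; the repo id, tag name and token below are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.create_tag(
...     repo_id="user/my-cool-model",
...     tag="v1.0",
...     tag_message="First stable release",
... )
```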
dataset_info
< source >( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) → hf_api.DatasetInfo
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- revision (str, optional) — The revision of the dataset repository from which to get the information.
- timeout (float, optional) — Timeout in seconds for the request to the Hub.
- files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc). Defaults to False.
- token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
Returns
The dataset repository information.
Get info on one specific dataset on huggingface.co.
Dataset can be private if you pass an acceptable token.
Raises the following errors:
- RepositoryNotFoundError — If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
- RevisionNotFoundError — If the revision to download from cannot be found.
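Example (public datasets need no token; the dataset id below is illustrative):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> dataset = api.dataset_info("squad")
# DatasetInfo(id='squad', ...)
```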
delete_branch
< source >( repo_id: str branch: str token: Optional[str] = None repo_type: Optional[str] = None )
Parameters
- repo_id (str) — The repository in which a branch will be deleted. Example: "user/my-cool-model".
- branch (str) — The name of the branch to delete.
- token (str, optional) — Authentication token. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if deleting a branch from a dataset or space, None or "model" if from a model. Default is None.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- HfHubHTTPError — If trying to delete a protected branch. Ex: main cannot be deleted.
- HfHubHTTPError — If trying to delete a branch that does not exist.
Delete a branch from a repo on the Hub.
delete_file
< source >( path_in_repo: str repo_id: str token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None )
Parameters
- path_in_repo (str) — Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin"
- repo_id (str) — The repository from which the file will be deleted, for example: "username/custom_transformers"
- token (str, optional) — Authentication token, obtained with HfApi.login method. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if the file is in a dataset or space, None or "model" if in a model. Default is None.
- revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
- commit_message (str, optional) — The summary / title / first line of the generated commit. Defaults to f"Delete {path_in_repo} with huggingface_hub".
- commit_description (str, optional) — The description of the generated commit.
- create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
- parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (first 7 characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
Deletes a file in the given repo.
Raises the following errors:
- HTTPError — if the HuggingFace API returned an error
- ValueError — if some parameter value is invalid
- RepositoryNotFoundError — If the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
- RevisionNotFoundError — If the revision to download from cannot be found.
- EntryNotFoundError — If the file to download cannot be found.
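Example (a sketch; the repo id, file path and token below are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.delete_file(
...     path_in_repo="checkpoints/1fec34a/weights.bin",
...     repo_id="username/custom_transformers",
... )
```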
delete_folder
< source >( path_in_repo: str repo_id: str token: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None commit_message: Optional[str] = None commit_description: Optional[str] = None create_pr: Optional[bool] = None parent_commit: Optional[str] = None )
Parameters
- path_in_repo (str) — Relative folder path in the repo, for example: "checkpoints/1fec34a".
- repo_id (str) — The repository from which the folder will be deleted, for example: "username/custom_transformers"
- token (str, optional) — Authentication token, obtained with HfApi.login method. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if the folder is in a dataset or space, None or "model" if in a model. Default is None.
- revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
- commit_message (str, optional) — The summary / title / first line of the generated commit. Defaults to f"Delete folder {path_in_repo} with huggingface_hub".
- commit_description (str, optional) — The description of the generated commit.
- create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
- parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (first 7 characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
Deletes a folder in the given repo.
Simple wrapper around create_commit() method.
delete_repo
< source >( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None missing_ok: bool = False )
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- token (str, optional) — An authentication token (See https://huggingface.co/settings/token)
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model.
- missing_ok (bool, optional, defaults to False) — If True, do not raise an error if repo does not exist.
Raises
- RepositoryNotFoundError — If the repository to delete from cannot be found and missing_ok is set to False (default).
Delete a repo from the HuggingFace Hub. CAUTION: this is irreversible.
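Example (a sketch; the repo id and token below are placeholders; remember this cannot be undone):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi(token="hf_xxx")
>>> api.delete_repo(repo_id="username/test-model", missing_ok=True)
```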
delete_space_secret
< source >( repo_id: str key: str token: Optional[str] = None )
Deletes a secret from a Space.
Secrets allow you to set secret keys or tokens in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
delete_space_storage
< source >( repo_id: str token: Optional[str] = None ) → SpaceRuntime
Parameters
- repo_id (str) — ID of the Space to update. Example: "HuggingFaceH4/open_llm_leaderboard".
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Returns
Runtime information about a Space including Space stage and hardware.
Raises
BadRequestError
- BadRequestError — If the Space has no persistent storage.
Delete persistent storage for a Space.
delete_space_variable
< source >( repo_id: str key: str token: Optional[str] = None )
Deletes a variable from a Space.
Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
delete_tag
< source >( repo_id: str tag: str token: Optional[str] = None repo_type: Optional[str] = None )
Parameters
- repo_id (str) — The repository in which a tag will be deleted. Example: "user/my-cool-model".
- tag (str) — The name of the tag to delete.
- token (str, optional) — Authentication token. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if tagging a dataset or space, None or "model" if tagging a model. Default is None.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If tag is not found.
Delete a tag from a repo on the Hub.
duplicate_space
< source >( from_id: str to_id: Optional[str] = None private: Optional[bool] = None token: Optional[str] = None exist_ok: bool = False hardware: Optional[SpaceHardware] = None storage: Optional[SpaceStorage] = None sleep_time: Optional[int] = None secrets: Optional[List[Dict[str, str]]] = None variables: Optional[List[Dict[str, str]]] = None ) → RepoUrl
Parameters
- from_id (str) — ID of the Space to duplicate. Example: "pharma/CLIP-Interrogator".
- to_id (str, optional) — ID of the new Space. Example: "dog/CLIP-Interrogator". If not provided, the new Space will have the same name as the original Space, but in your account.
- private (bool, optional) — Whether the new Space should be private or not. Defaults to the same privacy as the original Space.
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
- exist_ok (bool, optional, defaults to False) — If True, do not raise an error if repo already exists.
- hardware (SpaceHardware or str, optional) — Choice of Hardware. Example: "t4-medium". See SpaceHardware for a complete list.
- storage (SpaceStorage or str, optional) — Choice of persistent storage tier. Example: "small". See SpaceStorage for a complete list.
- sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- secrets (List[Dict[str, str]], optional) — A list of secret keys to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
- variables (List[Dict[str, str]], optional) — A list of public environment variables to set in your Space. Each item is in the form {"key": ..., "value": ..., "description": ...} where description is optional. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.
Returns
URL to the newly created repo. Value is a subclass of str
containing
attributes like endpoint
, repo_type
and repo_id
.
Raises
- HTTPError — If the HuggingFace API returned an error.
- RepositoryNotFoundError — If one of from_id or to_id cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
Duplicate a Space.
Programmatically duplicate a Space. The new Space will be created in your account and will be in the same state as the original Space (running or paused). You can duplicate a Space regardless of its current state.
Example:
>>> from huggingface_hub import duplicate_space
# Duplicate a Space to your account
>>> duplicate_space("multimodalart/dreambooth-training")
RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...)
# Can set custom destination id and visibility flag.
>>> duplicate_space("multimodalart/dreambooth-training", to_id="my-dreambooth", private=True)
RepoUrl('https://huggingface.co/spaces/nateraw/my-dreambooth',...)
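The secrets and variables parameters both expect lists of {"key": ..., "value": ..., "description": ...} dicts. A small helper can build that payload from a plain mapping (a sketch; as_kv_list is a hypothetical name, not part of huggingface_hub):

```python
from typing import Dict, List, Optional

def as_kv_list(
    mapping: Dict[str, str],
    descriptions: Optional[Dict[str, str]] = None,
) -> List[Dict[str, str]]:
    """Build the [{"key": ..., "value": ..., "description": ...}] payload shape."""
    descriptions = descriptions or {}
    items = []
    for key, value in mapping.items():
        item = {"key": key, "value": value}
        if key in descriptions:
            # "description" is optional, so only add it when provided.
            item["description"] = descriptions[key]
        items.append(item)
    return items

secrets = as_kv_list(
    {"GITHUB_API_KEY": "ghp_xxx"},
    {"GITHUB_API_KEY": "Token used to call the Github API"},
)
print(secrets)
```

The result can then be passed directly, e.g. `duplicate_space(from_id, secrets=secrets)`.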
edit_discussion_comment
< source >( repo_id: str discussion_num: int comment_id: str new_content: str token: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionComment
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
discussion_num (
int
) — The number of the Discussion or Pull Request. Must be a strictly positive integer. -
comment_id (
str
) — The ID of the comment to edit. -
new_content (
str
) — The new content of the comment. Comments support markdown formatting. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. -
token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Returns
the edited comment
Edits a comment on a Discussion / Pull Request.
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
file_exists
< source >( repo_id: str filename: str repo_type: Optional[str] = None revision: Optional[str] = None token: Optional[str] = None )
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
filename (
str
) — The name of the file to check, for example:"config.json"
-
repo_type (
str
, optional) — Set to"dataset"
or"space"
if getting repository info from a dataset or a space,None
or"model"
if getting repository info from a model. Default isNone
. -
revision (
str
, optional) — The revision of the repository from which to get the information. Defaults to"main"
branch. -
token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Checks if a file exists in a repository on the Hugging Face Hub.
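A common pattern is probing a repo for the first of several candidate filenames. The sketch below wraps a file_exists-style callable; first_existing_file is a hypothetical helper, and the stub replaces the real network call so the example runs offline:

```python
from typing import Callable, Iterable, Optional

def first_existing_file(
    repo_id: str,
    candidates: Iterable[str],
    exists: Callable[..., bool],
    revision: Optional[str] = None,
) -> Optional[str]:
    """Return the first candidate filename that exists in the repo, or None.

    `exists` is expected to follow the signature of HfApi.file_exists:
    (repo_id, filename, revision=...).
    """
    for filename in candidates:
        if exists(repo_id, filename, revision=revision):
            return filename
    return None

# Stub standing in for HfApi().file_exists so the sketch runs without network access.
_fake_repo_files = {"config.json", "model.safetensors"}

def stub_exists(repo_id, filename, revision=None):
    return filename in _fake_repo_files

print(first_existing_file("user/repo", ["model.safetensors", "pytorch_model.bin"], stub_exists))
```

With a real client you would pass `HfApi().file_exists` (or a `functools.partial` of it) in place of the stub.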
get_dataset_tags
List all valid dataset tags as a nested namespace object.
get_discussion_details
< source >( repo_id: str discussion_num: int repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
discussion_num (
int
) — The number of the Discussion or Pull Request. Must be a strictly positive integer. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. -
token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Fetches a Discussion’s / Pull Request’s details from the Hub.
Returns: DiscussionWithDetails
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
get_full_repo_name
< source >(
model_id: str
organization: Optional[str] = None
token: Optional[Union[bool, str]] = None
)
→
str
Parameters
-
model_id (
str
) — The name of the model. -
organization (
str
, optional) — If passed, the repository name will be in the organization namespace instead of the user namespace. -
token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
str
The repository name in the user’s namespace ({username}/{model_id}) if no organization is passed, and under the organization namespace ({organization}/{model_id}) otherwise.
Returns the repository name for a given model ID and optional organization.
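The namespacing rule described above is simple enough to restate in a few lines (an illustrative re-implementation, not the library’s code; full_repo_name is a hypothetical name):

```python
from typing import Optional

def full_repo_name(model_id: str, username: str, organization: Optional[str] = None) -> str:
    """Apply the documented rule: the organization namespace wins when given,
    otherwise the user namespace is used."""
    namespace = organization if organization is not None else username
    return f"{namespace}/{model_id}"

print(full_repo_name("bert-base", "alice"))            # alice/bert-base
print(full_repo_name("bert-base", "alice", "my-org"))  # my-org/bert-base
```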
get_model_tags
List all valid model tags as a nested namespace object.
get_repo_discussions
< source >(
repo_id: str
repo_type: Optional[str] = None
token: Optional[str] = None
)
→
Iterator[Discussion]
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if fetching from a dataset or space,None
or"model"
if fetching from a model. Default isNone
. -
token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token).
Returns
Iterator[Discussion]
An iterator of Discussion objects.
Fetches Discussions and Pull Requests for the given repo.
Example:
get_space_runtime
< source >( repo_id: str token: Optional[str] = None ) → SpaceRuntime
Gets runtime information about a Space.
get_space_variables
< source >( repo_id: str token: Optional[str] = None )
Gets all variables from a Space.
Variables allow you to set environment variables in a Space without hardcoding them. For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables
get_token_permission
< source >(
token: Optional[str] = None
)
→
Literal["read", "write", None]
Check if a given token
is valid and return its permissions.
For more details about tokens, please refer to https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens.
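The returned literal can drive simple gating logic before attempting a write operation. A sketch (the action table below is an assumption made for illustration, not part of the library):

```python
from typing import Literal, Optional

Permission = Optional[Literal["read", "write"]]

# Hypothetical mapping of permission level to allowed operations.
ALLOWED = {
    "read": {"download"},
    "write": {"download", "upload"},
    None: set(),  # invalid or missing token
}

def can(permission: Permission, action: str) -> bool:
    """Gate an operation on the permission literal returned by get_token_permission()."""
    return action in ALLOWED[permission]

print(can("read", "upload"))  # False
```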
hf_hub_download
< source >( repo_id: str filename: str subfolder: Optional[str] = None repo_type: Optional[str] = None revision: Optional[str] = None cache_dir: Union[str, Path, None] = None local_dir: Union[str, Path, None] = None local_dir_use_symlinks: Union[bool, Literal['auto']] = 'auto' force_download: bool = False force_filename: Optional[str] = None proxies: Optional[Dict] = None etag_timeout: float = 10 resume_download: bool = False local_files_only: bool = False legacy_cache_layout: bool = False )
Parameters
-
repo_id (
str
) — A user or an organization name and a repo name separated by a/
. -
filename (
str
) — The name of the file in the repo. -
subfolder (
str
, optional) — An optional value corresponding to a folder inside the model repo. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if downloading from a dataset or space,None
or"model"
if downloading from a model. Default isNone
. -
revision (
str
, optional) — An optional Git revision id which can be a branch name, a tag, or a commit hash. -
endpoint (
str
, optional) — Hugging Face Hub base url. Will default to https://huggingface.co/. Otherwise, one can set theHF_ENDPOINT
environment variable. -
cache_dir (
str
,Path
, optional) — Path to the folder where cached files are stored. -
local_dir (
str
orPath
, optional) — If provided, the downloaded file will be placed under this directory, either as a symlink (default) or a regular file (see description for more details). -
local_dir_use_symlinks (
"auto"
orbool
, defaults to"auto"
) — To be used withlocal_dir
. If set to “auto”, the cache directory will be used and the file will be either duplicated or symlinked to the local directory depending on its size. If set toTrue
, a symlink will be created, no matter the file size. If set toFalse
, the file will either be duplicated from cache (if already exists) or downloaded from the Hub and not cached. See description for more details. -
force_download (
bool
, optional, defaults toFalse
) — Whether the file should be downloaded even if it already exists in the local cache. -
proxies (
dict
, optional) — Dictionary mapping protocol to the URL of the proxy passed torequests.request
. -
etag_timeout (
float
, optional, defaults to10
) — When fetching ETag, how many seconds to wait for the server to send data before giving up, which is passed torequests.request
. -
resume_download (
bool
, optional, defaults toFalse
) — IfTrue
, resume a previously interrupted download. -
local_files_only (
bool
, optional, defaults toFalse
) — IfTrue
, avoid downloading the file and return the path to the local cached file if it exists. -
legacy_cache_layout (
bool
, optional, defaults toFalse
) — IfTrue
, uses the legacy file cache layout i.e. just call hf_hub_url() thencached_download
. This is deprecated as the new cache layout is more powerful.
Download a given file if it’s not already present in the local cache.
The new cache file layout looks like this:
- The cache directory contains one subfolder per repo_id (namespaced by repo type)
- inside each repo folder:
- refs is a list of the latest known revision => commit_hash pairs
- blobs contains the actual file blobs (identified by their git-sha or sha256, depending on whether they’re LFS files or not)
- snapshots contains one subfolder per commit, each “commit” contains the subset of the files that have been resolved at that particular commit. Each filename is a symlink to the blob at that particular commit.
If local_dir
is provided, the file structure from the repo will be replicated in this location. You can configure
how you want to move those files:
- If
local_dir_use_symlinks="auto"
(default), files are downloaded and stored in the cache directory as blob files. Small files (<5MB) are duplicated inlocal_dir
while a symlink is created for bigger files. The goal is to be able to manually edit and save small files without corrupting the cache while saving disk space for binary files. The 5MB threshold can be configured with theHF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD
environment variable. - If
local_dir_use_symlinks=True
, files are downloaded, stored in the cache directory and symlinked inlocal_dir
. This is optimal in term of disk usage but files must not be manually edited. - If
local_dir_use_symlinks=False
and the blob files exist in the cache directory, they are duplicated in the local dir. This means disk usage is not optimized. - Finally, if
local_dir_use_symlinks=False
and the blob files do not exist in the cache directory, then the files are downloaded and directly placed underlocal_dir
. This means if you need to download them again later, they will be re-downloaded entirely.
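The bullets above boil down to a small decision table. Here is a sketch of that documented behavior (not the actual implementation; placement is a hypothetical name):

```python
import os
from typing import Literal, Union

# 5MB default; the real threshold is read from HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD.
AUTO_THRESHOLD = int(os.environ.get("HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD", 5 * 1024 * 1024))

def placement(
    use_symlinks: Union[bool, Literal["auto"]],
    file_size: int,
    blob_in_cache: bool,
) -> str:
    """Return how a file lands in local_dir: 'symlink', 'duplicate' or 'direct'."""
    if use_symlinks == "auto":
        # Small files are duplicated so they can be edited safely;
        # big files are symlinked to save disk space.
        return "duplicate" if file_size < AUTO_THRESHOLD else "symlink"
    if use_symlinks is True:
        return "symlink"
    # use_symlinks is False: copy from the cache if possible,
    # otherwise download straight to local_dir (not cached).
    return "duplicate" if blob_in_cache else "direct"

print(placement("auto", 1024, True))  # duplicate
```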
[ 96] .
└── [ 160] models--julien-c--EsperBERTo-small
├── [ 160] blobs
│ ├── [321M] 403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
│ ├── [ 398] 7cb18dc9bafbfcf74629a4b760af1b160957a83e
│ └── [1.4K] d7edf6bd2a681fb0175f7735299831ee1b22b812
├── [ 96] refs
│ └── [ 40] main
└── [ 128] snapshots
├── [ 128] 2439f60ef33a0d46d85da5001d52aeda5b00ce9f
│ ├── [ 52] README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
│ └── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
└── [ 128] bbc77c8132af1cc5cf678da3f1ddf2de43606d48
├── [ 52] README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
└── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
Raises the following errors:
EnvironmentError
iftoken=True
and the token cannot be found.OSError
if ETag cannot be determined.ValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access. - RevisionNotFoundError If the revision to download from cannot be found.
- EntryNotFoundError If the file to download cannot be found.
- LocalEntryNotFoundError If network is disabled or unavailable and file is not found in cache.
hide_discussion_comment
< source >( repo_id: str discussion_num: int comment_id: str token: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionComment
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
discussion_num (
int
) — The number of the Discussion or Pull Request . Must be a strictly positive integer. -
comment_id (
str
) — The ID of the comment to edit. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. -
token (
str
, optional) — An authentication token (See https://huggingface.co/settings/token)
Returns
the hidden comment
Hides a comment on a Discussion / Pull Request.
Raises the following errors:
HTTPError
if the HuggingFace API returned an errorValueError
if some parameter value is invalid- RepositoryNotFoundError
If the repository to download from cannot be found. This may be because it doesn’t exist,
or because it is set to
private
and you do not have access.
like
< source >( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None )
Parameters
-
repo_id (
str
) — The repository to like. Example:"user/my-cool-model"
. -
token (
str
, optional) — Authentication token. Will default to the stored token. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if liking a dataset or space,None
or"model"
if liking a model. Default isNone
.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
Like a given repo on the Hub (e.g. set as favorite).
See also unlike() and list_liked_repos().
list_datasets
< source >(
filter: Union[DatasetFilter, str, Iterable[str], None] = None
author: Optional[str] = None
search: Optional[str] = None
sort: Union[Literal['lastModified'], str, None] = None
direction: Optional[Literal[-1]] = None
limit: Optional[int] = None
full: Optional[bool] = None
token: Optional[str] = None
)
→
Iterable[DatasetInfo]
Parameters
-
filter (DatasetFilter or
str
orIterable
, optional) — A string or DatasetFilter which can be used to identify datasets on the hub. -
author (
str
, optional) — A string which identify the author of the returned datasets. -
search (
str
, optional) — A string that will be contained in the returned datasets. -
sort (
Literal["lastModified"]
orstr
, optional) — The key with which to sort the resulting datasets. Possible values are the properties of the huggingface_hub.hf_api.DatasetInfo class. -
direction (
Literal[-1]
orint
, optional) — Direction in which to sort. The value-1
sorts by descending order while all other values sort by ascending order. -
limit (
int
, optional) — The limit on the number of datasets fetched. Leaving this option toNone
fetches all datasets. -
full (
bool
, optional) — Whether to fetch all dataset data, including thelastModified
and thecardData
. Can contain useful information such as the PapersWithCode ID. -
token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Iterable[DatasetInfo]
an iterable of huggingface_hub.hf_api.DatasetInfo objects.
List datasets hosted on the Huggingface Hub, given some filters.
Example usage with the filter
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all datasets
>>> api.list_datasets()
>>> # Get all valid search arguments
>>> args = DatasetSearchArguments()
>>> # List only the text classification datasets
>>> api.list_datasets(filter="task_categories:text-classification")
>>> # Using the `DatasetFilter`
>>> filt = DatasetFilter(task_categories="text-classification")
>>> # With `DatasetSearchArguments`
>>> filt = DatasetFilter(task=args.task_categories.text_classification)
>>> api.list_datasets(filter=filt)
>>> # List only the datasets in russian for language modeling
>>> api.list_datasets(
... filter=("language:ru", "task_ids:language-modeling")
... )
>>> # Using the `DatasetFilter`
>>> filt = DatasetFilter(language="ru", task_ids="language-modeling")
>>> # With `DatasetSearchArguments`
>>> filt = DatasetFilter(
... language=args.language.ru,
... task_ids=args.task_ids.language_modeling,
... )
>>> api.list_datasets(filter=filt)
Example usage with the search
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all datasets with "text" in their name
>>> api.list_datasets(search="text")
>>> # List all datasets with "text" in their name made by google
>>> api.list_datasets(search="text", author="google")
list_files_info
< source >(
repo_id: str
paths: Union[List[str], str, None] = None
expand: bool = False
revision: Optional[str] = None
repo_type: Optional[str] = None
token: Optional[Union[bool, str]] = None
)
→
Iterable[RepoFile]
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
paths (
Union[List[str], str, None]
, optional) — The paths to get information about. Paths to files are directly resolved. Paths to folders are resolved recursively which means that information is returned about all files in the folder and its subfolders. IfNone
, all files are returned (the default). If a path does not exist, it is ignored without raising an exception. -
expand (
bool
, optional, defaults toFalse
) — Whether to fetch more information about the files (e.g. last commit and security scan results). This operation is more expensive for the server so only 50 results are returned per page (instead of 1000). As pagination is implemented inhuggingface_hub
, this is transparent for you except for the time it takes to get the results. -
revision (
str
, optional) — The revision of the repository from which to get the information. Defaults to"main"
branch. -
repo_type (
str
, optional) — The type of the repository from which to get the information ("model"
,"dataset"
or"space"
). Defaults to "model"
. -
token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Iterable[RepoFile]
The information about the files, as an iterable of RepoFile
objects. The order of the files is
not guaranteed.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If revision is not found (error 404) on the repo.
List files on a repo and get information about them.
Takes as input a list of paths. Those paths can be either files or folders. Two server endpoints are called:
- POST “/paths-info” to get information about the provided paths. Called once.
- GET “/tree?recursive=True” to paginate over the input folders. Called only if a folder path is provided as input. Will be called multiple times to follow pagination.
If no path is provided as input, step 1. is ignored and all files from the repo are listed.
Examples:
Get information about files on a repo.
>>> from huggingface_hub import list_files_info
>>> files_info = list_files_info("lysandre/arxiv-nlp", ["README.md", "config.json"])
>>> files_info
<generator object HfApi.list_files_info at 0x7f93b848e730>
>>> list(files_info)
[
RepoFile: {"blob_id": "43bd404b159de6fba7c2f4d3264347668d43af25", "lfs": None, "rfilename": "README.md", "size": 391},
RepoFile: {"blob_id": "2f9618c3a19b9a61add74f70bfb121335aeef666", "lfs": None, "rfilename": "config.json", "size": 554},
]
Get even more information about files on a repo (last commit and security scan results)
>>> from huggingface_hub import list_files_info
>>> files_info = list_files_info("prompthero/openjourney-v4", expand=True)
>>> list(files_info)
[
RepoFile: {
{'blob_id': '815004af1a321eaed1d93f850b2e94b0c0678e42',
'lastCommit': {'date': '2023-03-21T09:05:27.000Z',
'id': '47b62b20b20e06b9de610e840282b7e6c3d51190',
'title': 'Upload diffusers weights (#48)'},
'lfs': None,
'rfilename': 'model_index.json',
'security': {'avScan': {'virusFound': False, 'virusNames': None},
'blobId': '815004af1a321eaed1d93f850b2e94b0c0678e42',
'name': 'model_index.json',
'pickleImportScan': None,
'repositoryId': 'models/prompthero/openjourney-v4',
'safe': True},
'size': 584}
},
RepoFile: {
{'blob_id': 'd2343d78b33ac03dade1d525538b02b130d0a3a0',
'lastCommit': {'date': '2023-03-21T09:05:27.000Z',
'id': '47b62b20b20e06b9de610e840282b7e6c3d51190',
'title': 'Upload diffusers weights (#48)'},
'lfs': {'pointer_size': 134,
'sha256': 'dcf4507d99b88db73f3916e2a20169fe74ada6b5582e9af56cfa80f5f3141765',
'size': 334711857},
'rfilename': 'vae/diffusion_pytorch_model.bin',
'security': {'avScan': {'virusFound': False, 'virusNames': None},
'blobId': 'd2343d78b33ac03dade1d525538b02b130d0a3a0',
'name': 'vae/diffusion_pytorch_model.bin',
'pickleImportScan': {'highestSafetyLevel': 'innocuous',
'imports': [{'module': 'torch._utils',
'name': '_rebuild_tensor_v2',
'safety': 'innocuous'},
{'module': 'collections', 'name': 'OrderedDict', 'safety': 'innocuous'},
{'module': 'torch', 'name': 'FloatStorage', 'safety': 'innocuous'}]},
'repositoryId': 'models/prompthero/openjourney-v4',
'safe': True},
'size': 334711857}
},
(...)
]
List LFS files from the “vae/” folder in “stabilityai/stable-diffusion-2” repository.
>>> from huggingface_hub import list_files_info
>>> [info.rfilename for info in list_files_info("stabilityai/stable-diffusion-2", "vae") if info.lfs is not None]
['vae/diffusion_pytorch_model.bin', 'vae/diffusion_pytorch_model.safetensors']
list_liked_repos
< source >( user: Optional[str] = None token: Optional[str] = None ) → UserLikes
Parameters
-
user (
str
, optional) — Name of the user for which you want to fetch the likes. -
token (
str
, optional) — A valid authentication token (see https://huggingface.co/settings/token). Used only ifuser
is not passed to implicitly determine the current user name.
Returns
object containing the user name and 3 lists of repo ids (1 for models, 1 for datasets and 1 for Spaces).
Raises
- ValueError — If user is not passed and no token found (either from argument or from machine).
List all public repos liked by a user on huggingface.co.
This list is public, so a token is optional. If user is not passed, it defaults to
the logged-in user.
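The returned UserLikes object exposes three lists of repo ids. The sketch below tallies them, using a stand-in namedtuple (with the attribute names described above) instead of a live API call:

```python
from collections import namedtuple

# Stand-in for the UserLikes return object (attribute names per the description above).
UserLikes = namedtuple("UserLikes", ["user", "models", "datasets", "spaces"])

def count_likes(likes: "UserLikes") -> dict:
    """Summarize a user's likes per repo type."""
    return {
        "models": len(likes.models),
        "datasets": len(likes.datasets),
        "spaces": len(likes.spaces),
        "total": len(likes.models) + len(likes.datasets) + len(likes.spaces),
    }

likes = UserLikes("julien-c", ["gpt2"], ["squad", "glue"], [])
print(count_likes(likes))  # {'models': 1, 'datasets': 2, 'spaces': 0, 'total': 3}
```

With the real client, `likes = HfApi().list_liked_repos("julien-c")` would produce an object with the same attributes.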
list_metrics
< source >(
)
→
List[MetricInfo]
Returns
List[MetricInfo]
a list of MetricInfo
objects.
Get the public list of all the metrics on huggingface.co
list_models
< source >(
filter: Union[ModelFilter, str, Iterable[str], None] = None
author: Optional[str] = None
search: Optional[str] = None
emissions_thresholds: Optional[Tuple[float, float]] = None
sort: Union[Literal['lastModified'], str, None] = None
direction: Optional[Literal[-1]] = None
limit: Optional[int] = None
full: Optional[bool] = None
cardData: bool = False
fetch_config: bool = False
token: Optional[Union[bool, str]] = None
)
→
Iterable[ModelInfo]
Parameters
-
filter (ModelFilter or
str
orIterable
, optional) — A string or ModelFilter which can be used to identify models on the Hub. -
author (
str
, optional) — A string which identify the author (user or organization) of the returned models -
search (
str
, optional) — A string that will be contained in the returned model ids. -
emissions_thresholds (
Tuple
, optional) — A tuple of two ints or floats representing a minimum and maximum carbon footprint to filter the resulting models with in grams. -
sort (
Literal["lastModified"]
orstr
, optional) — The key with which to sort the resulting models. Possible values are the properties of the huggingface_hub.hf_api.ModelInfo class. -
direction (
Literal[-1]
orint
, optional) — Direction in which to sort. The value-1
sorts by descending order while all other values sort by ascending order. -
limit (
int
, optional) — The limit on the number of models fetched. Leaving this option toNone
fetches all models. -
full (
bool
, optional) — Whether to fetch all model data, including thelastModified
, thesha
, the files and thetags
. This is set toTrue
by default when using a filter. -
cardData (
bool
, optional) — Whether to grab the metadata for the model as well. Can contain useful information such as carbon emissions, metrics, and datasets trained on. -
fetch_config (
bool
, optional) — Whether to fetch the model configs as well. This is not included infull
due to its size. -
token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
Iterable[ModelInfo]
an iterable of huggingface_hub.hf_api.ModelInfo objects.
List models hosted on the Huggingface Hub, given some filters.
Example usage with the filter
argument:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List all models
>>> api.list_models()
>>> # Get all valid search arguments
>>> args = ModelSearchArguments()
>>> # List only the text classification models
>>> api.list_models(filter="text-classification")
>>> # Using the `ModelFilter`
>>> filt = ModelFilter(task="text-classification")
>>> # With `ModelSearchArguments`
>>> filt = ModelFilter(task=args.pipeline_tags.TextClassification)
>>> api.list_models(filter=filt)
>>> # Using `ModelFilter` and `ModelSearchArguments` to find text classification in both PyTorch and TensorFlow
>>> filt = ModelFilter(
... task=args.pipeline_tags.TextClassification,
... library=[args.library.PyTorch, args.library.TensorFlow],
... )
>>> api.list_models(filter=filt)
>>> # List only models from the AllenNLP library
>>> api.list_models(filter="allennlp")
>>> # Using `ModelFilter` and `ModelSearchArguments`
>>> filt = ModelFilter(library=args.library.allennlp)
>>> api.list_models(filter=filt)
list_repo_commits
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None revision: Optional[str] = None formatted: bool = False ) → List[GitCommitInfo]
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if listing commits from a dataset or a Space,None
or"model"
if listing from a model. Default isNone
. -
token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header. -
revision (
str
, optional) — The git revision to commit from. Defaults to the head of the"main"
branch. -
formatted (
bool
) — Whether to return the HTML-formatted title and description of the commits. Defaults to False.
Returns
List[GitCommitInfo]
list of objects containing information about the commits for a repo on the Hub.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If revision is not found (error 404) on the repo.
Get the list of commits of a given revision for a repo on the Hub.
Commits are sorted by date (last commit first).
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Commits are sorted by date (last commit first)
>>> initial_commit = api.list_repo_commits("gpt2")[-1]
# Initial commit is always a system commit containing the `.gitattributes` file.
>>> initial_commit
GitCommitInfo(
commit_id='9b865efde13a30c13e0a33e536cf3e4a5a9d71d8',
authors=['system'],
created_at=datetime.datetime(2019, 2, 18, 10, 36, 15, tzinfo=datetime.timezone.utc),
title='initial commit',
message='',
formatted_title=None,
formatted_message=None
)
# Create an empty branch by deriving from initial commit
>>> api.create_branch("gpt2", "new_empty_branch", revision=initial_commit.commit_id)
list_repo_files
< source >(
repo_id: str
revision: Optional[str] = None
repo_type: Optional[str] = None
timeout: Optional[float] = None
token: Optional[Union[bool, str]] = None
)
→
List[str]
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
revision (
str
, optional) — The revision of the model repository from which to get the information. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. -
token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
List[str]
the list of files in a given repository.
Get the list of files in a given repo.
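Since the method returns plain path strings, post-processing is ordinary list work. For example, grouping files by their top-level folder (a sketch over a hardcoded list rather than a live call; group_by_top_folder is a hypothetical helper):

```python
from collections import defaultdict
from typing import Dict, List

def group_by_top_folder(files: List[str]) -> Dict[str, List[str]]:
    """Group repo file paths by their first path component ('' for root files)."""
    groups: Dict[str, List[str]] = defaultdict(list)
    for path in files:
        top = path.split("/")[0] if "/" in path else ""
        groups[top].append(path)
    return dict(groups)

files = ["README.md", "vae/config.json", "vae/diffusion_pytorch_model.bin"]
print(group_by_top_folder(files))
```

With the real client, `files = HfApi().list_repo_files("user/repo")` would supply the input list.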
list_repo_refs
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[Union[bool, str]] = None ) → GitRefs
Parameters
-
repo_id (
str
) — A namespace (user or an organization) and a repo name separated by a/
. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if listing refs from a dataset or a Space,None
or"model"
if listing from a model. Default isNone
. -
token (
bool
orstr
, optional) — A valid authentication token (see https://huggingface.co/settings/token). IfNone
orTrue
and machine is logged in (throughhuggingface-cli login
or login()), token will be retrieved from the cache. IfFalse
, token is not sent in the request header.
Returns
object containing all information about branches and tags for a repo on the Hub.
Get the list of refs of a given repo (both tags and branches).
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_repo_refs("gpt2")
GitRefs(branches=[GitRefInfo(name='main', ref='refs/heads/main', target_commit='e7da7f221d5bf496a48136c0cd264e630fe9fcc8')], converts=[], tags=[])
>>> api.list_repo_refs("bigcode/the-stack", repo_type='dataset')
GitRefs(
branches=[
GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'),
GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714')
],
converts=[],
tags=[
GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da')
]
)
list_spaces
< source >(
filter: Union[str, Iterable[str], None] = None
author: Optional[str] = None
search: Optional[str] = None
sort: Union[Literal['lastModified'], str, None] = None
direction: Optional[Literal[-1]] = None
limit: Optional[int] = None
datasets: Union[str, Iterable[str], None] = None
models: Union[str, Iterable[str], None] = None
linked: bool = False
full: Optional[bool] = None
token: Optional[str] = None
)
→
Iterable[SpaceInfo]
Parameters
- filter (str or Iterable, optional) — A string tag or list of tags that can be used to identify Spaces on the Hub.
- author (str, optional) — A string which identifies the author of the returned Spaces.
- search (str, optional) — A string that will be contained in the returned Spaces.
- sort (Literal["lastModified"] or str, optional) — The key with which to sort the resulting Spaces. Possible values are the properties of the huggingface_hub.hf_api.SpaceInfo class.
- direction (Literal[-1] or int, optional) — Direction in which to sort. The value -1 sorts by descending order while all other values sort by ascending order.
- limit (int, optional) — The limit on the number of Spaces fetched. Leaving this option to None fetches all Spaces.
- datasets (str or Iterable, optional) — Whether to return Spaces that make use of a dataset. The name of a specific dataset can be passed as a string.
- models (str or Iterable, optional) — Whether to return Spaces that make use of a model. The name of a specific model can be passed as a string.
- linked (bool, optional) — Whether to return Spaces that make use of either a model or a dataset.
- full (bool, optional) — Whether to fetch all Spaces data, including the lastModified and the cardData.
- token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
Returns
Iterable[SpaceInfo]
an iterable of huggingface_hub.hf_api.SpaceInfo objects.
List Spaces hosted on the Hugging Face Hub, given some filters.
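Because the result is an iterable, entries can be consumed lazily; even with limit=None you can cap consumption on the client side. A minimal local sketch of that pattern (the generator below is a stand-in for the Hub response, not the real client):

```python
from itertools import islice

def fake_spaces():
    # Stand-in for the lazy iterable of SpaceInfo objects returned by list_spaces().
    for i in range(10_000):
        yield f"user/space-{i}"

# Consume only the first 5 entries without exhausting the iterable,
# similar in spirit to passing limit=5.
first_five = list(islice(fake_spaces(), 5))
# -> ["user/space-0", "user/space-1", "user/space-2", "user/space-3", "user/space-4"]
```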
merge_pull_request
< source >( repo_id: str discussion_num: int token: Optional[str] = None comment: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionStatusChange
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
- comment (str, optional) — An optional comment to post with the status change.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- token (str, optional) — An authentication token (see https://huggingface.co/settings/token).
Returns
the status change event
Merges a Pull Request.
Raises the following errors:
- HTTPError if the HuggingFace API returned an error
- ValueError if some parameter value is invalid
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
model_info
< source >( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None securityStatus: Optional[bool] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) → huggingface_hub.hf_api.ModelInfo
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- revision (str, optional) — The revision of the model repository from which to get the information.
- timeout (float, optional) — Whether to set a timeout for the request to the Hub.
- securityStatus (bool, optional) — Whether to retrieve the security status from the model repository as well.
- files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc.). Defaults to False.
- token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
Returns
The model repository information.
Get info on one specific model on huggingface.co.
Model can be private if you pass an acceptable token or are logged in.
Raises the following errors:
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
- RevisionNotFoundError if the revision to download from cannot be found.
move_repo
< source >( from_id: str to_id: str repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- from_id (str) — A namespace (user or an organization) and a repo name separated by a /. Original repository identifier.
- to_id (str) — A namespace (user or an organization) and a repo name separated by a /. Final repository identifier.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- token (str, optional) — An authentication token (see https://huggingface.co/settings/token).
Moves a repository from namespace1/repo_name1 to namespace2/repo_name2.
Note there are certain limitations. For more information about moving repositories, please see https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo.
Raises the following errors:
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
pause_space
< source >( repo_id: str token: Optional[str] = None ) → SpaceRuntime
Parameters
- repo_id (str) — ID of the Space to pause. Example: "Salesforce/BLIP2".
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Returns
Runtime information about your Space, including stage=PAUSED and requested hardware.
Raises
RepositoryNotFoundError or HfHubHTTPError or BadRequestError
- RepositoryNotFoundError — If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you are not authenticated.
- HfHubHTTPError — 403 Forbidden: only the owner of a Space can pause it. If you want to manage a Space that you don’t own, either ask the owner by opening a Discussion or duplicate the Space.
- BadRequestError — If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide a static Space, you can set it to private.
Pause your Space.
A paused Space stops executing until manually restarted by its owner. This is different from the sleeping state in which free Spaces go after 48h of inactivity. Paused time is not billed to your account, no matter the hardware you’ve selected. To restart your Space, use restart_space() and go to your Space settings page.
For more details, please visit the docs.
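The static-Space restriction above can be pictured as a precondition check. A purely illustrative sketch (the BadRequestError class and the stage strings below are local stand-ins mirroring the documented behavior, not the real client code):

```python
class BadRequestError(Exception):
    """Local stand-in for huggingface_hub's error of the same name."""

def check_can_pause(stage: str) -> None:
    # Static Spaces are always running and never billed, so pausing
    # them is rejected, as documented above.
    if stage == "STATIC":
        raise BadRequestError("Static Spaces cannot be paused")

check_can_pause("RUNNING")  # a regular running Space can be paused
```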
rename_discussion
< source >( repo_id: str discussion_num: int new_title: str token: Optional[str] = None repo_type: Optional[str] = None ) → DiscussionTitleChange
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- discussion_num (int) — The number of the Discussion or Pull Request. Must be a strictly positive integer.
- new_title (str) — The new title for the discussion.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- token (str, optional) — An authentication token (see https://huggingface.co/settings/token).
Returns
the title change event
Renames a Discussion.
Examples:
>>> new_title = "New title, fixing a typo"
>>> HfApi().rename_discussion(
... repo_id="username/repo_name",
...     discussion_num=34,
... new_title=new_title
... )
# DiscussionTitleChange(id='deadbeef0000000', type='title-change', ...)
Raises the following errors:
- HTTPError if the HuggingFace API returned an error
- ValueError if some parameter value is invalid
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
repo_exists
< source >( repo_id: str repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- repo_type (str, optional) — Set to "dataset" or "space" if getting repository info from a dataset or a space, None or "model" if getting repository info from a model. Default is None.
- token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
Checks if a repository exists on the Hugging Face Hub.
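An existence check of this kind amounts to attempting to fetch repository info and treating RepositoryNotFoundError as "does not exist". A hedged local sketch of that pattern (the fetch function and fixture data are stand-ins, not the real client):

```python
class RepositoryNotFoundError(Exception):
    """Local stand-in for huggingface_hub's error of the same name."""

_KNOWN_REPOS = {"gpt2", "bigcode/the-stack"}  # hypothetical fixture data

def fetch_repo_info(repo_id: str) -> dict:
    # Stand-in for a repo_info() call against the Hub.
    if repo_id not in _KNOWN_REPOS:
        raise RepositoryNotFoundError(repo_id)
    return {"repo_id": repo_id}

def repo_exists(repo_id: str) -> bool:
    # Existence is inferred from whether the info call succeeds.
    try:
        fetch_repo_info(repo_id)
        return True
    except RepositoryNotFoundError:
        return False
```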
repo_info
< source >(
repo_id: str
revision: Optional[str] = None
repo_type: Optional[str] = None
timeout: Optional[float] = None
files_metadata: bool = False
token: Optional[Union[bool, str]] = None
)
→
Union[SpaceInfo, DatasetInfo, ModelInfo]
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- revision (str, optional) — The revision of the repository from which to get the information.
- repo_type (str, optional) — Set to "dataset" or "space" if getting repository info from a dataset or a space, None or "model" if getting repository info from a model. Default is None.
- timeout (float, optional) — Whether to set a timeout for the request to the Hub.
- files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc.). Defaults to False.
- token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
Returns
Union[SpaceInfo, DatasetInfo, ModelInfo]
The repository information, as a huggingface_hub.hf_api.DatasetInfo, huggingface_hub.hf_api.ModelInfo or huggingface_hub.hf_api.SpaceInfo object.
Get the info object for a given repo of a given type.
Raises the following errors:
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
- RevisionNotFoundError if the revision to download from cannot be found.
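The return type follows repo_type. As a hedged sketch, the dispatch can be pictured as a simple mapping; the class names come from the docs above, but the helper itself is hypothetical:

```python
def info_class_for(repo_type=None):
    # None and "model" both resolve to ModelInfo, per the parameter docs.
    mapping = {
        None: "ModelInfo",
        "model": "ModelInfo",
        "dataset": "DatasetInfo",
        "space": "SpaceInfo",
    }
    if repo_type not in mapping:
        raise ValueError(f"Unknown repo_type: {repo_type!r}")
    return mapping[repo_type]

info_class_for("dataset")  # -> "DatasetInfo"
```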
request_space_hardware
< source >( repo_id: str hardware: SpaceHardware token: Optional[str] = None sleep_time: Optional[int] = None ) → SpaceRuntime
Parameters
- repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
- hardware (str or SpaceHardware) — Hardware on which to run the Space. Example: "t4-medium".
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
- sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
Returns
Runtime information about a Space including Space stage and hardware.
Request new hardware for a Space.
It is also possible to request hardware directly when creating the Space repo! See create_repo() for details.
request_space_storage
< source >( repo_id: str storage: SpaceStorage token: Optional[str] = None ) → SpaceRuntime
Parameters
- repo_id (str) — ID of the Space to update. Example: "HuggingFaceH4/open_llm_leaderboard".
- storage (str or SpaceStorage) — Storage tier. Either "small", "medium", or "large".
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Returns
Runtime information about a Space including Space stage and hardware.
Request persistent storage for a Space.
It is not possible to decrease persistent storage after it is granted. To do so, you must delete it via delete_space_storage().
restart_space
< source >( repo_id: str token: Optional[str] = None factory_reboot: bool = False ) → SpaceRuntime
Parameters
- repo_id (str) — ID of the Space to restart. Example: "Salesforce/BLIP2".
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
- factory_reboot (bool, optional) — If True, the Space will be rebuilt from scratch without caching any requirements.
Returns
Runtime information about your Space.
Raises
RepositoryNotFoundError or HfHubHTTPError or BadRequestError
- RepositoryNotFoundError — If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you are not authenticated.
- HfHubHTTPError — 403 Forbidden: only the owner of a Space can restart it. If you want to restart a Space that you don’t own, either ask the owner by opening a Discussion or duplicate the Space.
- BadRequestError — If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide a static Space, you can set it to private.
Restart your Space.
This is the only way to programmatically restart a Space if you’ve put it on Pause (see pause_space()). You must be the owner of the Space to restart it. If you are using upgraded hardware, your account will be billed as soon as the Space is restarted. You can trigger a restart no matter the current state of a Space.
For more details, please visit the docs.
run_as_future
< source >(
fn: Callable[..., R]
*args
**kwargs
)
→
Future
Parameters
- fn (Callable) — The method to run in the background.
- *args, **kwargs — Arguments with which the method will be called.
Returns
Future
a Future instance to get the result of the task.
Run a method in the background and return a Future instance.
The main goal is to run methods without blocking the main thread (e.g. to push data during a training). Background jobs are queued to preserve order but are not run in parallel. If you need to speed up your scripts by parallelizing many calls to the API, you must set up and use your own ThreadPoolExecutor.
Note: Most-used methods like upload_file(), upload_folder() and create_commit() have a run_as_future: bool argument to directly call them in the background. This is equivalent to calling api.run_as_future(...) on them but less verbose.
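The queueing behavior described above can be sketched with a single-worker executor: jobs keep their submission order and never overlap. This illustrates the concept only, not the library's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

results = []
with ThreadPoolExecutor(max_workers=1) as executor:
    # With a single worker, submitted jobs run sequentially in submission
    # order, which mirrors how background jobs are queued here.
    f1 = executor.submit(results.append, "push data")
    f2 = executor.submit(results.append, "push more data")
    f1.result()  # block until the first job has run
    f2.result()  # block until the second job has run
# results == ["push data", "push more data"]
```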
set_space_sleep_time
< source >( repo_id: str sleep_time: int token: Optional[str] = None ) → SpaceRuntime
Parameters
- repo_id (str) — ID of the repo to update. Example: "bigcode/in-the-stack".
- sleep_time (int, optional) — Number of seconds of inactivity to wait before a Space is put to sleep. Set to -1 if you don’t want your Space to sleep (default behavior for upgraded hardware). For free hardware, you can’t configure the sleep time (value is fixed to 48 hours of inactivity). See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- token (str, optional) — Hugging Face token. Will default to the locally saved token if not provided.
Returns
Runtime information about a Space including Space stage and hardware.
Set a custom sleep time for a Space running on upgraded hardware.
Your Space will go to sleep after X seconds of inactivity. You are not billed when your Space is in “sleep” mode. If a new visitor lands on your Space, it will “wake it up”. Only upgraded hardware can have a configurable sleep time. To know more about the sleep stage, please refer to https://huggingface.co/docs/hub/spaces-gpus#sleep-time.
It is also possible to set a custom sleep time when requesting hardware with request_space_hardware().
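The sleep_time semantics above (-1 disables sleeping, a positive value is an inactivity delay in seconds) can be summarized by a small validator. This helper is hypothetical, written only to restate the documented rules:

```python
def normalize_sleep_time(sleep_time: int) -> str:
    """Hypothetical helper restating the documented sleep_time semantics."""
    if sleep_time == -1:
        # -1 means the Space never goes to sleep (upgraded hardware default).
        return "never sleeps"
    if sleep_time > 0:
        return f"sleeps after {sleep_time} seconds of inactivity"
    # Anything else is rejected in this sketch; the real API may differ.
    raise ValueError("sleep_time must be -1 or a positive number of seconds")
```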
snapshot_download
< source >( repo_id: str repo_type: Optional[str] = None revision: Optional[str] = None cache_dir: Union[str, Path, None] = None local_dir: Union[str, Path, None] = None local_dir_use_symlinks: Union[bool, Literal['auto']] = 'auto' proxies: Optional[Dict] = None etag_timeout: float = 10 resume_download: bool = False force_download: bool = False local_files_only: bool = False allow_patterns: Optional[Union[List[str], str]] = None ignore_patterns: Optional[Union[List[str], str]] = None max_workers: int = 8 tqdm_class: Optional[base_tqdm] = None )
Parameters
- repo_id (str) — A user or an organization name and a repo name separated by a /.
- repo_type (str, optional) — Set to "dataset" or "space" if downloading from a dataset or space, None or "model" if downloading from a model. Default is None.
- revision (str, optional) — An optional Git revision id which can be a branch name, a tag, or a commit hash.
- cache_dir (str, Path, optional) — Path to the folder where cached files are stored.
- local_dir (str or Path, optional) — If provided, the downloaded files will be placed under this directory, either as symlinks (default) or regular files (see description for more details).
- local_dir_use_symlinks ("auto" or bool, defaults to "auto") — To be used with local_dir. If set to "auto", the cache directory will be used and the file will be either duplicated or symlinked to the local directory depending on its size. If set to True, a symlink will be created, no matter the file size. If set to False, the file will either be duplicated from cache (if already exists) or downloaded from the Hub and not cached. See description for more details.
- proxies (dict, optional) — Dictionary mapping protocol to the URL of the proxy passed to requests.request.
- etag_timeout (float, optional, defaults to 10) — When fetching ETag, how many seconds to wait for the server to send data before giving up, which is passed to requests.request.
- resume_download (bool, optional, defaults to False) — If True, resume a previously interrupted download.
- force_download (bool, optional, defaults to False) — Whether the file should be downloaded even if it already exists in the local cache.
- local_files_only (bool, optional, defaults to False) — If True, avoid downloading the file and return the path to the local cached file if it exists.
- allow_patterns (List[str] or str, optional) — If provided, only files matching at least one pattern are downloaded.
- ignore_patterns (List[str] or str, optional) — If provided, files matching any of the patterns are not downloaded.
- max_workers (int, optional) — Number of concurrent threads to download files (1 thread = 1 file download). Defaults to 8.
- tqdm_class (tqdm, optional) — If provided, overwrites the default behavior for the progress bar. Passed argument must inherit from tqdm.auto.tqdm or at least mimic its behavior. Note that the tqdm_class is not passed to each individual download. Defaults to the custom HF progress bar that can be disabled by setting the HF_HUB_DISABLE_PROGRESS_BARS environment variable.
Download repo files.
Download a whole snapshot of a repo’s files at the specified revision. This is useful when you want all files from a repo, because you don’t know which ones you will need a priori. All files are nested inside a folder in order to keep their actual filename relative to that folder. You can also filter which files to download using allow_patterns and ignore_patterns.
If local_dir is provided, the file structure from the repo will be replicated in this location. You can configure how you want to move those files:
- If local_dir_use_symlinks="auto" (default), files are downloaded and stored in the cache directory as blob files. Small files (<5MB) are duplicated in local_dir while a symlink is created for bigger files. The goal is to be able to manually edit and save small files without corrupting the cache while saving disk space for binary files. The 5MB threshold can be configured with the HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD environment variable.
- If local_dir_use_symlinks=True, files are downloaded, stored in the cache directory and symlinked in local_dir. This is optimal in terms of disk usage but files must not be manually edited.
- If local_dir_use_symlinks=False and the blob files exist in the cache directory, they are duplicated in the local dir. This means disk usage is not optimized.
- Finally, if local_dir_use_symlinks=False and the blob files do not exist in the cache directory, then the files are downloaded and directly placed under local_dir. This means if you need to download them again later, they will be re-downloaded entirely.
An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly configured. It is also not possible to filter which files to download when cloning a repository using git.
Raises the following errors:
- EnvironmentError if token=True and the token cannot be found
- OSError if ETag cannot be determined
- ValueError if some parameter value is invalid
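The allow_patterns / ignore_patterns filtering described above uses shell-style globs: a file is kept if it matches at least one allow pattern (when any are given) and matches no ignore pattern. A local sketch of that selection logic, assuming fnmatch-style matching (the helper itself is illustrative, not the library's internal function):

```python
from fnmatch import fnmatch
from typing import Iterable, List, Optional, Union

def select_files(
    files: Iterable[str],
    allow_patterns: Optional[Union[List[str], str]] = None,
    ignore_patterns: Optional[Union[List[str], str]] = None,
) -> List[str]:
    # Single strings are treated as one-element pattern lists.
    if isinstance(allow_patterns, str):
        allow_patterns = [allow_patterns]
    if isinstance(ignore_patterns, str):
        ignore_patterns = [ignore_patterns]
    kept = []
    for f in files:
        if allow_patterns is not None and not any(fnmatch(f, p) for p in allow_patterns):
            continue  # must match at least one allow pattern
        if ignore_patterns is not None and any(fnmatch(f, p) for p in ignore_patterns):
            continue  # excluded by an ignore pattern
        kept.append(f)
    return kept

select_files(["config.json", "model.safetensors", "model.bin"],
             allow_patterns="*.json")
# -> ["config.json"]
```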
space_info
< source >( repo_id: str revision: Optional[str] = None timeout: Optional[float] = None files_metadata: bool = False token: Optional[Union[bool, str]] = None ) → SpaceInfo
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- revision (str, optional) — The revision of the space repository from which to get the information.
- timeout (float, optional) — Whether to set a timeout for the request to the Hub.
- files_metadata (bool, optional) — Whether or not to retrieve metadata for files in the repository (size, LFS metadata, etc.). Defaults to False.
- token (bool or str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If None or True and machine is logged in (through huggingface-cli login or login()), token will be retrieved from the cache. If False, token is not sent in the request header.
Returns
The space repository information.
Get info on one specific Space on huggingface.co.
Space can be private if you pass an acceptable token.
Raises the following errors:
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
- RevisionNotFoundError if the revision to download from cannot be found.
super_squash_history
< source >( repo_id: str branch: Optional[str] = None commit_message: Optional[str] = None repo_type: Optional[str] = None token: Optional[str] = None )
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- branch (str, optional) — The branch to squash. Defaults to the head of the "main" branch.
- commit_message (str, optional) — The commit message to use for the squashed commit.
- repo_type (str, optional) — Set to "dataset" or "space" if listing commits from a dataset or a Space, None or "model" if listing from a model. Default is None.
- token (str, optional) — A valid authentication token (see https://huggingface.co/settings/token). If the machine is logged in (through huggingface-cli login or login()), token can be automatically retrieved from the cache.
Raises
RepositoryNotFoundError or RevisionNotFoundError or BadRequestError
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
- RevisionNotFoundError — If the branch to squash cannot be found.
- BadRequestError — If invalid reference for a branch. You cannot squash history on tags.
Squash commit history on a branch for a repo on the Hub.
Squashing the repo history is useful when you know you’ll make hundreds of commits and you don’t want to clutter the history. Squashing commits can only be performed from the head of a branch.
Once squashed, the commit history cannot be retrieved. This is a non-revertible operation.
Once the history of a branch has been squashed, it is not possible to merge it back into another branch since their history will have diverged.
Example:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Create repo
>>> repo_id = api.create_repo("test-squash").repo_id
# Make a lot of commits.
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="lfs.bin", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"another_content")
# Squash history
>>> api.super_squash_history(repo_id=repo_id)
unlike
< source >( repo_id: str token: Optional[str] = None repo_type: Optional[str] = None )
Parameters
- repo_id (str) — The repository to unlike. Example: "user/my-cool-model".
- token (str, optional) — Authentication token. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if unliking a dataset or space, None or "model" if unliking a model. Default is None.
Raises
- RepositoryNotFoundError — If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo does not exist.
Unlike a given repo on the Hub (e.g. remove from favorite list).
See also like() and list_liked_repos().
update_repo_visibility
< source >( repo_id: str private: bool = False token: Optional[str] = None organization: Optional[str] = None repo_type: Optional[str] = None name: Optional[str] = None )
Parameters
- repo_id (str) — A namespace (user or an organization) and a repo name separated by a /.
- private (bool, optional, defaults to False) — Whether the model repo should be private.
- token (str, optional) — An authentication token (see https://huggingface.co/settings/token).
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
Update the visibility setting of a repository.
Raises the following errors:
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
upload_file
< source >(
path_or_fileobj: Union[str, Path, bytes, BinaryIO]
path_in_repo: str
repo_id: str
token: Optional[str] = None
repo_type: Optional[str] = None
revision: Optional[str] = None
commit_message: Optional[str] = None
commit_description: Optional[str] = None
create_pr: Optional[bool] = None
parent_commit: Optional[str] = None
run_as_future: bool = False
)
→
str
or Future
Parameters
- path_or_fileobj (str, Path, bytes, or IO) — Path to a file on the local machine or binary data stream / fileobj / buffer.
- path_in_repo (str) — Relative filepath in the repo, for example: "checkpoints/1fec34a/weights.bin".
- repo_id (str) — The repository to which the file will be uploaded, for example: "username/custom_transformers".
- token (str, optional) — Authentication token, obtained with HfApi.login method. Will default to the stored token.
- repo_type (str, optional) — Set to "dataset" or "space" if uploading to a dataset or space, None or "model" if uploading to a model. Default is None.
- revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
- commit_message (str, optional) — The summary / title / first line of the generated commit.
- commit_description (str, optional) — The description of the generated commit.
- create_pr (boolean, optional) — Whether or not to create a Pull Request with that commit. Defaults to False. If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. If revision is set and is not a branch name (example: a commit oid), a RevisionNotFoundError is returned by the server.
- parent_commit (str, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (first 7 characters) are also supported. If specified and create_pr is False, the commit will fail if revision does not point to parent_commit. If specified and create_pr is True, the pull request will be created from parent_commit. Specifying parent_commit ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently.
- run_as_future (bool, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passing run_as_future=True will return a Future object. Defaults to False.
Returns
str or Future
The URL to visualize the uploaded file on the hub. If run_as_future=True is passed, returns a Future object which will contain the result when executed.
Upload a local file (up to 50 GB) to the given repo. The upload is done through an HTTP POST request and doesn’t require git or git-lfs to be installed.
Raises the following errors:
- HTTPError if the HuggingFace API returned an error
- ValueError if some parameter value is invalid
- RepositoryNotFoundError if the repository to download from cannot be found. This may be because it doesn’t exist, or because it is set to private and you do not have access.
- RevisionNotFoundError if the revision to download from cannot be found.
upload_file assumes that the repo already exists on the Hub. If you get a Client error 404, please make sure you are authenticated and that repo_id and repo_type are set correctly. If the repo does not exist, create it first using create_repo().
Example:
>>> from huggingface_hub import upload_file
>>> with open("./local/filepath", "rb") as fobj:
... upload_file(
...     path_or_fileobj=fobj,
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... )
"https://huggingface.co/datasets/username/my-dataset/blob/main/remote/file/path.h5"
>>> upload_file(
... path_or_fileobj=".\\local\\file\\path",
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-model",
... token="my_token",
... )
"https://huggingface.co/username/my-model/blob/main/remote/file/path.h5"
>>> upload_file(
... path_or_fileobj=".\\local\\file\\path",
... path_in_repo="remote/file/path.h5",
... repo_id="username/my-model",
... token="my_token",
... create_pr=True,
... )
"https://huggingface.co/username/my-model/blob/refs%2Fpr%2F1/remote/file/path.h5"
upload_folder
< source >(
repo_id: str
folder_path: Union[str, Path]
path_in_repo: Optional[str] = None
commit_message: Optional[str] = None
commit_description: Optional[str] = None
token: Optional[str] = None
repo_type: Optional[str] = None
revision: Optional[str] = None
create_pr: Optional[bool] = None
parent_commit: Optional[str] = None
allow_patterns: Optional[Union[List[str], str]] = None
ignore_patterns: Optional[Union[List[str], str]] = None
delete_patterns: Optional[Union[List[str], str]] = None
multi_commits: bool = False
multi_commits_verbose: bool = False
run_as_future: bool = False
)
→
str
or Future[str]
Parameters
-
repo_id (
str
) — The repository to which the file will be uploaded, for example:"username/custom_transformers"
-
folder_path (
str
orPath
) — Path to the folder to upload on the local file system -
path_in_repo (
str
, optional) — Relative path of the directory in the repo, for example:"checkpoints/1fec34a/results"
. Will default to the root folder of the repository. -
token (
str
, optional) — Authentication token, obtained withHfApi.login
method. Will default to the stored token. -
repo_type (
str
, optional) — Set to"dataset"
or"space"
if uploading to a dataset or space,None
or"model"
if uploading to a model. Default isNone
. -
revision (
str
, optional) — The git revision to commit from. Defaults to the head of the"main"
branch. -
commit_message (
str
, optional) — The summary / title / first line of the generated commit. Defaults to:f"Upload {path_in_repo} with huggingface_hub"
-
commit_description (
str
optional) — The description of the generated commit -
create_pr (
boolean
, optional) — Whether or not to create a Pull Request with that commit. Defaults toFalse
. Ifrevision
is not set, PR is opened against the"main"
branch. Ifrevision
is set and is a branch, PR is opened against this branch. Ifrevision
is set and is not a branch name (example: a commit oid), anRevisionNotFoundError
is returned by the server. If bothmulti_commits
andcreate_pr
are True, the PR created in the multi-commit process is kept open. -
parent_commit (
str
, optional) — The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. If specified andcreate_pr
isFalse
, the commit will fail ifrevision
does not point toparent_commit
. If specified andcreate_pr
isTrue
, the pull request will be created fromparent_commit
. Specifyingparent_commit
ensures the repo has not changed before committing the changes, and can be especially useful if the repo is updated / committed to concurrently. -
allow_patterns (
List[str]
orstr
, optional) — If provided, only files matching at least one pattern are uploaded. -
ignore_patterns (
List[str]
orstr
, optional) — If provided, files matching any of the patterns are not uploaded. -
delete_patterns (
List[str]
orstr
, optional) — If provided, remote files matching any of the patterns will be deleted from the repo while committing new files. This is useful if you don’t know which files have already been uploaded. Note: to avoid discrepancies the.gitattributes
file is not deleted even if it matches the pattern. -
multi_commits (
bool
) — If True, changes are pushed to a PR using a multi-commit process. Defaults toFalse
. -
multi_commits_verbose (
bool
) — If True andmulti_commits
is used, more information will be displayed to the user. -
run_as_future (
bool
, optional) — Whether or not to run this method in the background. Background jobs are run sequentially without blocking the main thread. Passingrun_as_future=True
will return a Future object. Defaults toFalse
.
Returns
str
or Future[str]
A URL to visualize the uploaded folder on the hub. If run_as_future=True
is passed,
returns a Future object which will contain the result when executed.
Upload a local folder to the given repo. The upload is done through HTTP requests and doesn’t require git or git-lfs to be installed.
The structure of the folder will be preserved. Files with the same name already present in the repository will be overwritten. Others will be left untouched.
Use the allow_patterns
and ignore_patterns
arguments to specify which files to upload. These parameters
accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as
documented here. If both allow_patterns
and
ignore_patterns
are provided, both constraints apply. By default, all files from the folder are uploaded.
Use the delete_patterns
argument to specify remote files you want to delete. Input type is the same as for
allow_patterns
(see above). If path_in_repo
is also provided, the patterns are matched against paths
relative to this folder. For example, upload_folder(..., path_in_repo="experiment", delete_patterns="logs/*")
will delete any remote file under ./experiment/logs/
. Note that the .gitattributes
file will not be deleted
even if it matches the patterns.
Any .git/
folder present in any subdirectory will be ignored. However, please be aware that the .gitignore
file is not taken into account.
Uses HfApi.create_commit
under the hood.
Raises the following errors:
- HTTPError — if the HuggingFace API returned an error
- ValueError — if some parameter value is invalid
upload_folder
assumes that the repo already exists on the Hub. If you get a Client error 404, please make
sure you are authenticated and that repo_id
and repo_type
are set correctly. If the repo does not exist, create
it first using create_repo().
multi_commits
is experimental. Its API and behavior are subject to change in the future without prior notice.
Example:
# Upload checkpoints folder except the log files
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... ignore_patterns="**/logs/*.txt",
... )
# "https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints"
# Upload checkpoints folder including logs while deleting existing logs from the repo
# Useful if you don't know exactly which log files have already been pushed
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... delete_patterns="**/logs/*.txt",
... )
"https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints"
# Upload checkpoints folder while creating a PR
>>> upload_folder(
... folder_path="local/checkpoints",
... path_in_repo="remote/experiment/checkpoints",
... repo_id="username/my-dataset",
... repo_type="dataset",
... token="my_token",
... create_pr=True,
... )
"https://huggingface.co/datasets/username/my-dataset/tree/refs%2Fpr%2F1/remote/experiment/checkpoints"
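When `run_as_future=True` is passed, the returned object behaves like a standard `concurrent.futures.Future`, and background jobs run sequentially. The semantics can be shown with a stdlib analogue (`fake_upload` is a hypothetical stand-in for the real upload call, used only to illustrate the Future mechanics):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_upload(folder: str) -> str:
    # Hypothetical stand-in for upload_folder: returns the kind of URL
    # the real call would return.
    return f"https://huggingface.co/datasets/username/my-dataset/tree/main/{folder}"

# A single background worker runs jobs sequentially without blocking the
# main thread -- the same behavior run_as_future=True provides.
executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(fake_upload, "checkpoints")

# Do other work here; call .result() only when the URL is actually needed.
url = future.result()
executor.shutdown()
```

Calling `.result()` blocks until the job has finished, so the URL is only fetched at the point where it is consumed.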
whoami
< source >( token: Optional[str] = None )
Call HF API to know “whoami”.
huggingface_hub.plan_multi_commits
< source >(
operations: typing.Iterable[typing.Union[huggingface_hub._commit_api.CommitOperationAdd, huggingface_hub._commit_api.CommitOperationDelete]]
max_operations_per_commit: int = 50
max_upload_size_per_commit: int = 2147483648
)
→
Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]
Parameters
-
operations (
List
ofCommitOperation()
) — The list of operations to split into commits. -
max_operations_per_commit (
int
) — Maximum number of operations in a single commit. Defaults to 50. -
max_upload_size_per_commit (
int
) — Maximum size to upload (in bytes) in a single commit. Defaults to 2GB. Files bigger than this limit are uploaded one per commit.
Returns
Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]
a tuple. First item is a list of lists of CommitOperationAdd representing the addition commits to push. The second item is a list of lists of CommitOperationDelete representing the deletion commits.
Split a list of operations into a list of commits to perform.
Implementation follows a sub-optimal (yet simple) algorithm:
- Delete operations are grouped together by commits of maximum
max_operations_per_commit
operations. - All additions exceeding
max_upload_size_per_commit
are committed 1 by 1. - All remaining additions are grouped together and split each time the
max_operations_per_commit
or themax_upload_size_per_commit
limit is reached.
We do not try to optimize the splitting to get the lowest number of commits as this is an NP-hard problem (see bin packing problem). For our use case, it is not problematic to use a sub-optimal solution so we favored an easy-to-explain implementation.
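The three-step strategy above can be sketched as follows. This is a simplified, illustrative re-implementation (not the library’s actual code), modeling additions as `(name, size)` tuples and deletions as plain names:

```python
def plan_commits(additions, deletions, max_ops=50, max_size=2_147_483_648):
    """Split operations into commit batches following the strategy above.

    additions: list of (name, size_in_bytes); deletions: list of names.
    Returns (addition_batches, deletion_batches).
    """
    # 1. Deletions are grouped into batches of at most `max_ops` operations.
    deletion_batches = [deletions[i:i + max_ops] for i in range(0, len(deletions), max_ops)]

    addition_batches = []
    # 2. Additions exceeding `max_size` are committed one by one.
    for op in additions:
        if op[1] >= max_size:
            addition_batches.append([op])
    # 3. Remaining additions are grouped, flushing a batch whenever either
    #    the operation-count or the upload-size limit would be exceeded.
    batch, batch_size = [], 0
    for name, size in additions:
        if size >= max_size:
            continue  # already handled in step 2
        if batch and (len(batch) >= max_ops or batch_size + size > max_size):
            addition_batches.append(batch)
            batch, batch_size = [], 0
        batch.append((name, size))
        batch_size += size
    if batch:
        addition_batches.append(batch)
    return addition_batches, deletion_batches
```

A greedy first-fit pass like this is sub-optimal in the bin-packing sense, but every batch it emits respects both limits, which is all the multi-commit process requires.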
plan_multi_commits
is experimental. Its API and behavior are subject to change in the future without prior notice.
Example:
>>> from huggingface_hub import HfApi, plan_multi_commits
>>> addition_commits, deletion_commits = plan_multi_commits(
... operations=[
... CommitOperationAdd(...),
... CommitOperationAdd(...),
... CommitOperationDelete(...),
... CommitOperationDelete(...),
... CommitOperationAdd(...),
... ],
... )
>>> HfApi().create_commits_on_pr(
... repo_id="my-cool-model",
... addition_commits=addition_commits,
... deletion_commits=deletion_commits,
... (...)
... verbose=True,
... )
The initial order of the operations is not guaranteed! All deletions will be performed before additions. If you do not update the same file multiple times, you are fine.
API Dataclasses
CommitInfo
class huggingface_hub.CommitInfo
< source >( commit_url: str commit_message: str commit_description: str oid: str pr_url: Optional[str] = None )
Parameters
-
commit_url (
str
) — Url where to find the commit. -
commit_message (
str
) — The summary (first line) of the commit that has been created. -
commit_description (
str
) — Description of the commit that has been created. Can be empty. -
oid (
str
) — Commit hash id. Example:"91c54ad1727ee830252e457677f467be0bfd8a57"
. -
pr_url (
str
, optional) — Url to the PR that has been created, if any. Populated whencreate_pr=True
is passed. -
pr_revision (
str
, optional) — Revision of the PR that has been created, if any. Populated whencreate_pr=True
is passed. Example:"refs/pr/1"
. -
pr_num (
int
, optional) — Number of the PR discussion that has been created, if any. Populated whencreate_pr=True
is passed. Can be passed asdiscussion_num
in get_discussion_details(). Example:1
.
Data structure containing information about a newly created commit.
Returned by create_commit().
DatasetInfo
class huggingface_hub.hf_api.DatasetInfo
< source >( id: Optional[str] = None sha: Optional[str] = None lastModified: Optional[str] = None tags: Optional[List[str]] = None siblings: Optional[List[Dict]] = None private: bool = False author: Optional[str] = None description: Optional[str] = None citation: Optional[str] = None cardData: Optional[dict] = None **kwargs )
Parameters
-
id (
str
, optional) — ID of dataset repository. -
sha (
str
, optional) — repo sha at this particular revision -
lastModified (
str
, optional) — date of last commit to repo -
tags (
List[str]
, optional) — List of tags. -
siblings (
List[RepoFile]
, optional) — list of huggingface_hub.hf_api.RepoFile objects that constitute the dataset. -
private (
bool
, optional, defaults toFalse
) — is the repo private -
author (
str
, optional) — repo author -
description (
str
, optional) — Description of the dataset -
citation (
str
, optional) — Dataset citation -
cardData (
Dict
, optional) — Metadata of the model card as a dictionary. -
kwargs (
Dict
, optional) — Kwargs that will become attributes of the class.
Info about a dataset accessible from huggingface.co
GitRefInfo
class huggingface_hub.GitRefInfo
< source >( data: Dict )
Contains information about a git reference for a repo on the Hub.
GitCommitInfo
class huggingface_hub.GitCommitInfo
< source >( data: Dict )
Parameters
-
commit_id (
str
) — OID of the commit (e.g."e7da7f221d5bf496a48136c0cd264e630fe9fcc8"
) -
authors (
List[str]
) — List of authors of the commit. -
created_at (
datetime
) — Datetime when the commit was created. -
title (
str
) — Title of the commit. This is a free-text value entered by the authors. -
message (
str
) — Description of the commit. This is a free-text value entered by the authors. -
formatted_title (
str
) — Title of the commit formatted as HTML. Only returned ifformatted=True
is set. -
formatted_message (
str
) — Description of the commit formatted as HTML. Only returned ifformatted=True
is set.
Contains information about a git commit for a repo on the Hub. Check out list_repo_commits() for more details.
GitRefs
class huggingface_hub.GitRefs
< source >( branches: List[GitRefInfo] converts: List[GitRefInfo] tags: List[GitRefInfo] )
Parameters
-
branches (
List[GitRefInfo]
) — A list of GitRefInfo containing information about branches on the repo. -
converts (
List[GitRefInfo]
) — A list of GitRefInfo containing information about “convert” refs on the repo. Converts are refs used (internally) to push preprocessed data in Dataset repos. -
tags (
List[GitRefInfo]
) — A list of GitRefInfo containing information about tags on the repo.
Contains information about all git references for a repo on the Hub.
Object is returned by list_repo_refs().
ModelInfo
class huggingface_hub.hf_api.ModelInfo
< source >( modelId: Optional[str] = None sha: Optional[str] = None lastModified: Optional[str] = None tags: Optional[List[str]] = None pipeline_tag: Optional[str] = None siblings: Optional[List[Dict]] = None private: bool = False author: Optional[str] = None config: Optional[Dict] = None securityStatus: Optional[Dict] = None **kwargs )
Parameters
-
modelId (
str
, optional) — ID of model repository. -
sha (
str
, optional) — repo sha at this particular revision -
lastModified (
str
, optional) — date of last commit to repo -
tags (
List[str]
, optional) — List of tags. -
pipeline_tag (
str
, optional) — Pipeline tag to identify the correct widget. -
siblings (
List[RepoFile]
, optional) — list of (huggingface_hub.hf_api.RepoFile) objects that constitute the model. -
private (
bool
, optional, defaults toFalse
) — is the repo private -
author (
str
, optional) — repo author -
config (
Dict
, optional) — Model configuration information -
securityStatus (
Dict
, optional) — Security status of the model. Example:{"containsInfected": False}
-
kwargs (
Dict
, optional) — Kwargs that will become attributes of the class.
Info about a model accessible from huggingface.co
RepoFile
class huggingface_hub.hf_api.RepoFile
< source >( rfilename: str size: Optional[int] = None blobId: Optional[str] = None lfs: Optional[BlobLfsInfo] = None **kwargs )
Parameters
- rfilename (str) — file name, relative to the repo root. This is the only attribute guaranteed to be present; under certain conditions other attributes may be set as well.
-
size (
int
, optional) — The file’s size, in bytes. This attribute is present whenfiles_metadata
argument of repo_info() is set toTrue
. It’sNone
otherwise. -
blob_id (
str
, optional) — The file’s git OID. This attribute is present whenfiles_metadata
argument of repo_info() is set toTrue
. It’sNone
otherwise. -
lfs (
BlobLfsInfo
, optional) — The file’s LFS metadata. This attribute is present whenfiles_metadata
argument of repo_info() is set toTrue
and the file is stored with Git LFS. It’sNone
otherwise.
Data structure that represents a public file inside a repo, accessible from huggingface.co
RepoUrl
class huggingface_hub.RepoUrl
< source >( url: Any endpoint: Optional[str] = None )
Parameters
-
url (
Any
) — String value of the repo url. -
endpoint (
str
, optional) — Endpoint of the Hub. Defaults to https://huggingface.co.
Raises
- ValueError — If URL cannot be parsed.
- ValueError — If repo_type is unknown.
Subclass of str
describing a repo URL on the Hub.
RepoUrl
is returned by HfApi.create_repo
. It inherits from str
for backward
compatibility. At initialization, the URL is parsed to populate properties:
- endpoint (
str
) - namespace (
Optional[str]
) - repo_name (
str
) - repo_id (
str
) - repo_type (
Literal["model", "dataset", "space"]
) - url (
str
)
Example:
>>> RepoUrl('https://huggingface.co/gpt2')
RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2')
>>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co')
RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset')
>>> RepoUrl('hf://datasets/my-user/my-dataset')
RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='my-user/my-dataset')
>>> HfApi.create_repo("dummy_model")
RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model')
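The parsing performed at initialization can be approximated with stdlib code. This is a simplified sketch of the behavior shown in the examples above (the real RepoUrl class handles more cases, such as the hf:// scheme and custom endpoints):

```python
from urllib.parse import urlparse

def parse_repo_url(url: str):
    """Extract (repo_type, namespace, repo_name, repo_id) from a Hub URL.

    Simplified sketch: assumes '<endpoint>/[datasets|spaces/]<namespace>/<name>'
    or '<endpoint>/<name>' for a model without a namespace.
    """
    parts = urlparse(url).path.strip("/").split("/")
    repo_type = "model"
    if parts[0] in ("datasets", "spaces"):
        repo_type = parts[0].rstrip("s")  # "datasets" -> "dataset", "spaces" -> "space"
        parts = parts[1:]
    if len(parts) == 1:
        namespace, repo_name = None, parts[0]
    else:
        namespace, repo_name = parts[0], parts[1]
    repo_id = repo_name if namespace is None else f"{namespace}/{repo_name}"
    return repo_type, namespace, repo_name, repo_id
```

For instance, `parse_repo_url("https://huggingface.co/gpt2")` yields a model with no namespace, mirroring the first example above.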
SpaceInfo
class huggingface_hub.hf_api.SpaceInfo
< source >( id: Optional[str] = None sha: Optional[str] = None lastModified: Optional[str] = None siblings: Optional[List[Dict]] = None private: bool = False author: Optional[str] = None **kwargs )
Parameters
-
id (
str
, optional) — id of space -
sha (
str
, optional) — repo sha at this particular revision -
lastModified (
str
, optional) — date of last commit to repo -
siblings (
List[RepoFile]
, optional) — list ofhuggingface_hub.hf_api.RepoFile
objects that constitute the Space -
private (
bool
, optional, defaults toFalse
) — is the repo private -
author (
str
, optional) — repo author -
kwargs (
Dict
, optional) — Kwargs that will become attributes of the class.
Info about a Space accessible from huggingface.co
This is a “dataclass”-like container that simply sets on itself any attribute passed by the server.
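The container behavior described above can be sketched in a few lines (an illustrative model of how the info dataclasses absorb server attributes, not the library's actual class):

```python
class ServerInfo:
    """Container that sets on itself any attribute passed by the server.

    Sketch of the behavior described above: unknown keys become plain
    attributes, so new server-side fields never break the client.
    """

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

info = ServerInfo(id="user/space", private=False, likes=3)
```

This design is why the `kwargs` parameter is documented as "Kwargs that will become attributes of the class" throughout these dataclasses.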
UserLikes
class huggingface_hub.UserLikes
< source >( user: str total: int datasets: List[str] models: List[str] spaces: List[str] )
Parameters
-
user (
str
) — Name of the user for which we fetched the likes. -
total (
int
) — Total number of likes. -
datasets (
List[str]
) — List of datasets liked by the user (as repo_ids). -
models (
List[str]
) — List of models liked by the user (as repo_ids). -
spaces (
List[str]
) — List of spaces liked by the user (as repo_ids).
Contains information about a user's likes on the Hub.
CommitOperation
Below are the supported values for CommitOperation()
:
class huggingface_hub.CommitOperationAdd
< source >( path_in_repo: str path_or_fileobj: typing.Union[str, pathlib.Path, bytes, typing.BinaryIO] )
Parameters
-
path_in_repo (
str
) — Relative filepath in the repo, for example:"checkpoints/1fec34a/weights.bin"
-
path_or_fileobj (
str
,Path
,bytes
, orBinaryIO
) — Either:- a path to a local file (as
str
orpathlib.Path
) to upload - a buffer of bytes (
bytes
) holding the content of the file to upload - a “file object” (subclass of
io.BufferedIOBase
), typically obtained withopen(path, "rb")
. It must supportseek()
andtell()
methods.
- a path to a local file (as
Raises
ValueError
ValueError
— Ifpath_or_fileobj
is not one ofstr
,Path
,bytes
orio.BufferedIOBase
.ValueError
— Ifpath_or_fileobj
is astr
orPath
but not a path to an existing file.ValueError
— Ifpath_or_fileobj
is aio.BufferedIOBase
but it doesn’t support bothseek()
andtell()
.
Data structure holding necessary info to upload a file to a repository on the Hub.
as_file
< source >( with_tqdm: bool = False )
A context manager that yields a file-like object allowing you to read the underlying
data behind path_or_fileobj
.
Example:
>>> operation = CommitOperationAdd(
... path_in_repo="remote/dir/weights.h5",
... path_or_fileobj="./local/weights.h5",
... )
CommitOperationAdd(path_in_repo='remote/dir/weights.h5', path_or_fileobj='./local/weights.h5')
>>> with operation.as_file() as file:
... content = file.read()
>>> with operation.as_file(with_tqdm=True) as file:
... while True:
... data = file.read(1024)
... if not data:
... break
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
>>> with operation.as_file(with_tqdm=True) as file:
... requests.put(..., data=file)
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
class huggingface_hub.CommitOperationDelete
< source >( path_in_repo: str is_folder: typing.Union[bool, typing.Literal['auto']] = 'auto' )
Parameters
-
path_in_repo (
str
) — Relative filepath in the repo, for example:"checkpoints/1fec34a/weights.bin"
for a file or"checkpoints/1fec34a/"
for a folder. -
is_folder (
bool
orLiteral["auto"]
, optional) — Whether the Delete Operation applies to a folder or not. If “auto”, the path type (file or folder) is guessed automatically by looking if path ends with a ”/” (folder) or not (file). To explicitly set the path type, you can setis_folder=True
oris_folder=False
.
Data structure holding necessary info to delete a file or a folder from a repository on the Hub.
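The `is_folder="auto"` guessing rule described above can be sketched as (illustrative, not the library's actual code):

```python
def resolve_is_folder(path_in_repo: str, is_folder="auto") -> bool:
    """Resolve the is_folder flag: with "auto", a trailing "/" marks a
    folder; otherwise the explicit boolean wins."""
    if is_folder == "auto":
        return path_in_repo.endswith("/")
    return bool(is_folder)
```

So `"checkpoints/1fec34a/"` is treated as a folder deletion and `"checkpoints/1fec34a/weights.bin"` as a file deletion, unless the flag is set explicitly.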
class huggingface_hub.CommitOperationCopy
< source >( src_path_in_repo: str path_in_repo: str src_revision: typing.Optional[str] = None )
Parameters
-
src_path_in_repo (
str
) — Relative filepath in the repo of the file to be copied, e.g."checkpoints/1fec34a/weights.bin"
. -
path_in_repo (
str
) — Relative filepath in the repo where to copy the file, e.g."checkpoints/1fec34a/weights_copy.bin"
. -
src_revision (
str
, optional) — The git revision of the file to be copied. Can be any valid git revision. Default to the target commit revision.
Data structure holding necessary info to copy a file in a repository on the Hub.
Limitations:
- Only LFS files can be copied. To copy a regular file, you need to download it locally and re-upload it.
- Cross-repository copies are not supported.
Note: you can combine a CommitOperationCopy and a CommitOperationDelete to rename an LFS file on the Hub.
CommitScheduler
class huggingface_hub.CommitScheduler
< source >( repo_id: str folder_path: typing.Union[str, pathlib.Path] every: typing.Union[int, float] = 5 path_in_repo: typing.Optional[str] = None repo_type: typing.Optional[str] = None revision: typing.Optional[str] = None private: bool = False token: typing.Optional[str] = None allow_patterns: typing.Union[typing.List[str], str, NoneType] = None ignore_patterns: typing.Union[typing.List[str], str, NoneType] = None squash_history: bool = False hf_api: typing.Optional[ForwardRef('HfApi')] = None )
Parameters
-
repo_id (
str
) — The id of the repo to commit to. -
folder_path (
str
orPath
) — Path to the local folder to upload regularly. -
every (
int
orfloat
, optional) — The number of minutes between each commit. Defaults to 5 minutes. -
path_in_repo (
str
, optional) — Relative path of the directory in the repo, for example:"checkpoints/"
. Defaults to the root folder of the repository. -
repo_type (
str
, optional) — The type of the repo to commit to. Defaults tomodel
. -
revision (
str
, optional) — The revision of the repo to commit to. Defaults tomain
. -
private (
bool
, optional) — Whether to make the repo private. Defaults toFalse
. This value is ignored if the repo already exists. -
token (
str
, optional) — The token to use to commit to the repo. Defaults to the token saved on the machine. -
allow_patterns (
List[str]
orstr
, optional) — If provided, only files matching at least one pattern are uploaded. -
ignore_patterns (
List[str]
orstr
, optional) — If provided, files matching any of the patterns are not uploaded. -
squash_history (
bool
, optional) — Whether to squash the history of the repo after each commit. Defaults toFalse
. Squashing commits is useful to avoid degraded performance on the repo when it grows too large. -
hf_api (
HfApi
, optional) — The HfApi client to use to commit to the Hub. Can be set with custom settings (user agent, token,…).
Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).
The scheduler is started when instantiated and runs indefinitely. At the end of your script, a last commit is triggered. Check out the upload guide to learn more about how to use it.
Example:
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler
# Scheduler uploads every 10 minutes
>>> csv_path = Path("watched_folder/data.csv")
>>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)
>>> with csv_path.open("a") as f:
... f.write("first line")
# Some time later (...)
>>> with csv_path.open("a") as f:
... f.write("second line")
push_to_hub
Push the folder to the Hub and return the commit info.
This method is not meant to be called directly. It is run in the background by the scheduler, respecting a queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency issues.
The default behavior of push_to_hub
is to assume an append-only folder. It lists all files in the folder and
uploads only changed files. If no changes are found, the method returns without committing anything. If you want
to change this behavior, you can inherit from CommitScheduler and override this method. This can be useful
for example to compress data together in a single file before committing. For more details and examples, check
out our integration guide.
stop
Stop the scheduler.
A stopped scheduler cannot be restarted. Mostly for testing purposes.
trigger
Trigger a push_to_hub
and return a future.
This method is automatically called every every
minutes. You can also call it manually to trigger a commit
immediately, without waiting for the next scheduled commit.
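The interval mechanism (run every N minutes, manual trigger, non-restartable stop) can be sketched with stdlib threading. This is an illustrative model, not the actual CommitScheduler implementation:

```python
import threading

class IntervalScheduler:
    """Run `fn` every `every` seconds in a background thread until stop()."""

    def __init__(self, fn, every: float):
        self.fn = fn
        self.every = every
        self._stop_event = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Event.wait doubles as an interruptible sleep: it returns True as
        # soon as stop() sets the event, ending the loop.
        while not self._stop_event.wait(self.every):
            self.fn()

    def trigger(self):
        """Run the job immediately, outside the regular schedule."""
        self.fn()

    def stop(self):
        """Stop the scheduler; like CommitScheduler, it cannot be restarted."""
        self._stop_event.set()
        self._thread.join()
```

Using an `Event` instead of `time.sleep` is what makes `stop()` return promptly rather than waiting out the remainder of the current interval.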
Token helper
huggingface_hub
stores the authentication information locally so that it may be re-used in subsequent
methods.
It does this using the HfFolder utility, which saves data in the Hugging Face home folder.
delete_token
Deletes the token from storage. Does not fail if token does not exist.
get_token
Get token or None if not existent.
Note that a token can be also provided using the HUGGING_FACE_HUB_TOKEN
environment variable.
Token is saved in the huggingface home folder. You can configure it by setting
the HF_HOME
environment variable. Previous location was ~/.huggingface/token
.
If token is found in old location but not in new location, it is copied there first.
For more details, see https://github.com/huggingface/huggingface_hub/issues/1232.
save_token
Save token, creating folder as needed.
Token is saved in the huggingface home folder. You can configure it by setting
the HF_HOME
environment variable.
Search helpers
Some helpers to filter repositories on the Hub are available in the huggingface_hub
package.
DatasetFilter
class huggingface_hub.DatasetFilter
< source >( author: typing.Optional[str] = None benchmark: typing.Union[typing.List[str], str, NoneType] = None dataset_name: typing.Optional[str] = None language_creators: typing.Union[typing.List[str], str, NoneType] = None language: typing.Union[typing.List[str], str, NoneType] = None multilinguality: typing.Union[typing.List[str], str, NoneType] = None size_categories: typing.Union[typing.List[str], str, NoneType] = None task_categories: typing.Union[typing.List[str], str, NoneType] = None task_ids: typing.Union[typing.List[str], str, NoneType] = None )
A class that converts human-readable dataset search parameters into ones compatible with the REST API. For all parameters, capitalization does not matter.
- author (
str
, optional) — A string or list of strings that can be used to identify datasets on the Hub by the original uploader (author or organization), such asfacebook
orhuggingface
. - benchmark (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub by their official benchmark. - dataset_name (
str
, optional) — A string or list of strings that can be used to identify datasets on the Hub by its name, such asSQAC
orwikineural
- language_creators (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub with how the data was curated, such ascrowdsourced
ormachine_generated
. - language (
str
orList
, optional) — A string or list of strings representing a two-character language to filter datasets by on the Hub. - multilinguality (
str
orList
, optional) — A string or list of strings representing a filter for datasets that contain multiple languages. - size_categories (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub by the size of the dataset such as100K<n<1M
or1M<n<10M
. - task_categories (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub by the designed task, such asaudio_classification
ornamed_entity_recognition
. - task_ids (
str
orList
, optional) — A string or list of strings that can be used to identify datasets on the Hub by the specific task such asspeech_emotion_recognition
orparaphrase
.
Examples:
>>> from huggingface_hub import DatasetFilter
>>> # Using author
>>> new_filter = DatasetFilter(author="facebook")
>>> # Using benchmark
>>> new_filter = DatasetFilter(benchmark="raft")
>>> # Using dataset_name
>>> new_filter = DatasetFilter(dataset_name="wikineural")
>>> # Using language_creators
>>> new_filter = DatasetFilter(language_creators="crowdsourced")
>>> # Using language
>>> new_filter = DatasetFilter(language="en")
>>> # Using multilinguality
>>> new_filter = DatasetFilter(multilinguality="multilingual")
>>> # Using size_categories
>>> new_filter = DatasetFilter(size_categories="100K<n<1M")
>>> # Using task_categories
>>> new_filter = DatasetFilter(task_categories="audio_classification")
>>> # Using task_ids
>>> new_filter = DatasetFilter(task_ids="paraphrase")
ModelFilter
class huggingface_hub.ModelFilter
< source >( author: typing.Optional[str] = None library: typing.Union[typing.List[str], str, NoneType] = None language: typing.Union[typing.List[str], str, NoneType] = None model_name: typing.Optional[str] = None task: typing.Union[typing.List[str], str, NoneType] = None trained_dataset: typing.Union[typing.List[str], str, NoneType] = None tags: typing.Union[typing.List[str], str, NoneType] = None )
Parameters
-
author (
str
, optional) — A string that can be used to identify models on the Hub by the original uploader (author or organization), such asfacebook
orhuggingface
. -
library (
str
orList
, optional) — A string or list of strings of foundational libraries models were originally trained from, such as pytorch, tensorflow, or allennlp. -
language (
str
orList
, optional) — A string or list of strings of languages, both by name and country code, such as “en” or “English” -
model_name (
str
, optional) — A string that contain complete or partial names for models on the Hub, such as “bert” or “bert-base-cased” -
task (
str
orList
, optional) — A string or list of strings of tasks models were designed for, such as: “fill-mask” or “automatic-speech-recognition” -
tags (
str
orList
, optional) — A string tag or a list of tags to filter models on the Hub by, such astext-generation
orspacy
. -
trained_dataset (
str
orList
, optional) — A string tag or a list of string tags of the trained dataset for a model on the Hub.
A class that converts human-readable model search parameters into ones compatible with the REST API. For all parameters, capitalization does not matter.
>>> from huggingface_hub import ModelFilter
>>> # For the author
>>> new_filter = ModelFilter(author="facebook")
>>> # For the library
>>> new_filter = ModelFilter(library="pytorch")
>>> # For the language
>>> new_filter = ModelFilter(language="french")
>>> # For the model_name
>>> new_filter = ModelFilter(model_name="bert")
>>> # For the task
>>> new_filter = ModelFilter(task="text-classification")
>>> # Retrieving tags using the `HfApi.get_model_tags` method
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# To list model tags
>>> api.get_model_tags()
# To list dataset tags
>>> api.get_dataset_tags()
>>> new_filter = ModelFilter(tags="benchmark:raft")
>>> # Related to the dataset
>>> new_filter = ModelFilter(trained_dataset="common_voice")
DatasetSearchArguments
A nested namespace object holding all possible values for properties of datasets currently hosted on the Hub, with tab-completion. If a value starts with a number, it will only exist in the dictionary.
Example:
>>> args = DatasetSearchArguments()
>>> args.author.huggingface
'huggingface'
>>> args.language.en
'language:en'
DatasetSearchArguments
is a legacy class meant for exploratory purposes only. Its
initialization requires listing all datasets on the Hub, which makes it increasingly
slow as the number of repos on the Hub increases.
ModelSearchArguments
A nested namespace object holding all possible values for properties of models currently hosted on the Hub, with tab-completion. If a value starts with a number, it will only exist in the dictionary.
Example:
>>> args = ModelSearchArguments()
>>> args.author.huggingface
'huggingface'
>>> args.language.en
'en'
ModelSearchArguments
is a legacy class meant for exploratory purposes only. Its
initialization requires listing all models on the Hub, which makes it increasingly
slow as the number of repos on the Hub increases.