The huggingface_hub library offers a range of mixins that can be used as
parent classes for your objects, providing simple upload and download
functions.
ModelHubMixin

A generic Hub mixin for machine learning models. Define your own mixin for
any framework by inheriting from this class and overriding the
_save_pretrained and _from_pretrained methods to implement custom logic
for saving and loading your classes. See PyTorchModelHubMixin for an
example.

_save_pretrained

Overwrite this method in a subclass to define how to save your model.

_from_pretrained

( model_id revision cache_dir force_download proxies resume_download local_files_only token **model_kwargs )

Overwrite this method in a subclass to define how to load your model from a
pretrained checkpoint.
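As a sketch of this contract, the toy mixin below mirrors the two-method override pattern. ToyHubMixin and JsonModel are illustrative names, and the base class is a simplified stand-in for ModelHubMixin, not the real implementation (which also handles Hub downloads, caching, and revisions):

```python
import json
import os


class ToyHubMixin:
    # Simplified stand-in for ModelHubMixin: the public methods delegate
    # to the two private hooks that a subclass must override.
    def save_pretrained(self, save_directory):
        os.makedirs(save_directory, exist_ok=True)
        self._save_pretrained(save_directory)

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, **model_kwargs):
        # The real mixin would resolve and download Hub files here.
        return cls._from_pretrained(
            model_id=pretrained_model_name_or_path, **model_kwargs
        )

    def _save_pretrained(self, save_directory):
        raise NotImplementedError

    @classmethod
    def _from_pretrained(cls, model_id, **kwargs):
        raise NotImplementedError


class JsonModel(ToyHubMixin):
    # A "model" whose weights are a plain dict, serialized as JSON.
    def __init__(self, weights):
        self.weights = weights

    def _save_pretrained(self, save_directory):
        with open(os.path.join(save_directory, "weights.json"), "w") as f:
            json.dump(self.weights, f)

    @classmethod
    def _from_pretrained(cls, model_id, **kwargs):
        with open(os.path.join(model_id, "weights.json")) as f:
            return cls(json.load(f))
```

With this pattern, the framework-specific serialization lives entirely in the two hooks, while the mixin keeps the public save/load API uniform.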
from_pretrained

( pretrained_model_name_or_path: str force_download: bool = False resume_download: bool = False proxies: typing.Optional[typing.Dict] = None token: typing.Union[str, bool, NoneType] = None cache_dir: typing.Optional[str] = None local_files_only: bool = False **model_kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A revision, specified by appending @ at the end of model_id, like this: dbmdz/bert-base-german-cased@main. The revision is the specific model version to use. It can be a branch name, a tag name, or a commit id; since we use a git-based system for storing models and other artifacts on huggingface.co, revision can be any identifier allowed by git.
- A path to a directory containing model weights saved using save_pretrained, e.g., ./my_model_directory/.
- None, if you are providing both the configuration and the state dictionary (with the keyword arguments config and state_dict, respectively).

force_download (bool, optional, defaults to False) —
Whether to force the (re-)download of the model weights and configuration
files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether to delete incompletely received files. Will attempt to resume the
download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g.,
{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies
are used on each request.

token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True,
will use the token generated when running transformers-cli login (stored in
~/.huggingface).

cache_dir (Union[str, os.PathLike], optional) —
Path to a directory in which a downloaded pretrained model configuration
should be cached if the standard cache should not be used.

local_files_only (bool, optional, defaults to False) —
Whether to only look at local files (i.e., do not try to download the
model).

model_kwargs (Dict, optional) —
Additional keyword arguments passed to the model during initialization.

Download and instantiate a model from the Hugging Face Hub.

Passing token=True is required when you want to use a private model.
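The model_id@revision form described above can be sketched with a small helper. split_model_id is a hypothetical name used for illustration only, not a huggingface_hub function (the library performs this resolution internally):

```python
def split_model_id(pretrained_model_name_or_path):
    # Split the "namespace/name@revision" form into (model_id, revision);
    # revision is None when no "@" suffix is present.
    if "@" in pretrained_model_name_or_path:
        model_id, revision = pretrained_model_name_or_path.split("@", 1)
        return model_id, revision
    return pretrained_model_name_or_path, None
```

For example, "dbmdz/bert-base-german-cased@main" splits into the model id "dbmdz/bert-base-german-cased" and the revision "main", while a bare id like "bert-base-uncased" yields no revision.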
push_to_hub

( repo_path_or_name: typing.Optional[str] = None repo_url: typing.Optional[str] = None commit_message: str = 'Add model' organization: typing.Optional[str] = None private: bool = False api_endpoint: typing.Optional[str] = None token: typing.Optional[str] = None git_user: typing.Optional[str] = None git_email: typing.Optional[str] = None config: typing.Optional[dict] = None skip_lfs_files: bool = False repo_id: typing.Optional[str] = None branch: typing.Optional[str] = None create_pr: typing.Optional[bool] = None allow_patterns: typing.Union[typing.List[str], str, NoneType] = None ignore_patterns: typing.Union[typing.List[str], str, NoneType] = None )

Parameters

repo_id (str, optional) —
Repository name to which to push.

commit_message (str, optional, defaults to "Add model") —
Message to commit while pushing.

private (bool, optional, defaults to False) —
Whether the repository created should be private.

api_endpoint (str, optional) —
The API endpoint to use when pushing the model to the hub.

token (str, optional) —
The token to use as HTTP bearer authorization for remote files. If not set,
will use the token set when logging in with transformers-cli login (stored
in ~/.huggingface).

branch (str, optional) —
The git branch on which to push the model. This defaults to the default
branch as specified in your repository, which defaults to "main".

create_pr (boolean, optional, defaults to False) —
Whether or not to create a Pull Request from branch with that commit.

config (dict, optional) —
Configuration object to be saved alongside the model weights.

allow_patterns (List[str] or str, optional) —
If provided, only files matching at least one pattern are pushed.

ignore_patterns (List[str] or str, optional) —
If provided, files matching any of the patterns are not pushed.
Upload model checkpoint to the Hub.

Use allow_patterns and ignore_patterns to precisely filter which files
should be pushed to the Hub. See the upload_folder() reference for more
details.
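The filtering semantics can be sketched with glob matching. filter_files is a hypothetical helper shown for illustration, not the actual upload_folder implementation:

```python
import fnmatch


def filter_files(paths, allow_patterns=None, ignore_patterns=None):
    # A file is kept if it matches at least one allow pattern (when any
    # are given) and matches none of the ignore patterns. A single
    # pattern may be passed as a plain string.
    if isinstance(allow_patterns, str):
        allow_patterns = [allow_patterns]
    if isinstance(ignore_patterns, str):
        ignore_patterns = [ignore_patterns]
    kept = []
    for path in paths:
        if allow_patterns is not None and not any(
            fnmatch.fnmatch(path, pattern) for pattern in allow_patterns
        ):
            continue
        if ignore_patterns is not None and any(
            fnmatch.fnmatch(path, pattern) for pattern in ignore_patterns
        ):
            continue
        kept.append(path)
    return kept
```

For instance, allow_patterns=["*.bin", "*.json"] keeps only weight and config files, while ignore_patterns="logs/*" drops everything under a logs directory.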
save_pretrained

( save_directory: typing.Union[str, pathlib.Path] config: typing.Optional[dict] = None push_to_hub: bool = False **kwargs )

Parameters

save_directory (str or Path) —
Directory in which to save the weights.

config (dict, optional) —
Configuration (must be a dict) to save alongside the weights, if desired.

push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face Hub after saving it.
You can specify the repository you want to push to with repo_id (will
default to the name of save_directory in your namespace).

kwargs —
Additional keyword arguments passed along to the
~utils.PushToHubMixin.push_to_hub method.
Save weights in local directory.
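The resulting directory layout can be sketched as follows. This is an illustration only: save_with_config is a hypothetical helper, and the file names pytorch_model.bin and config.json are assumptions based on common Hub conventions rather than guaranteed output of save_pretrained:

```python
import json
import os


def save_with_config(save_directory, weights_bytes, config=None):
    # Write the serialized weights into save_directory and, when a
    # config dict is given, store it as JSON next to them.
    os.makedirs(save_directory, exist_ok=True)
    with open(os.path.join(save_directory, "pytorch_model.bin"), "wb") as f:
        f.write(weights_bytes)
    if config is not None:
        with open(os.path.join(save_directory, "config.json"), "w") as f:
            json.dump(config, f)
```

Keeping the config next to the weights is what lets from_pretrained rebuild the object later without extra arguments.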
PyTorchModelHubMixin

Implementation of ModelHubMixin to provide model Hub upload/download
capabilities to PyTorch models. The model is set in evaluation mode by
default using model.eval() (dropout modules are deactivated). To train the
model, you should first set it back in training mode with model.train().
Example:
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin
>>> class MyModel(nn.Module, PyTorchModelHubMixin):
... def __init__(self):
... super().__init__()
... self.param = nn.Parameter(torch.rand(3, 4))
... self.linear = nn.Linear(4, 5)
... def forward(self, x):
... return self.linear(x + self.param)
>>> model = MyModel()
>>> # Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")
>>> # Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")
>>> # Download and initialize weights from the Hub
>>> model = MyModel.from_pretrained("username/my-awesome-model")
KerasModelHubMixin

Implementation of ModelHubMixin to provide model Hub upload/download
capabilities to Keras models.

Example:
>>> import tensorflow as tf
>>> from huggingface_hub import KerasModelHubMixin
>>> class MyModel(tf.keras.Model, KerasModelHubMixin):
... def __init__(self, **kwargs):
... super().__init__()
... self.config = kwargs.pop("config", None)
... self.dummy_inputs = ...
... self.layer = ...
... def call(self, *args):
... return ...
>>> # Initialize and compile the model as you normally would
>>> model = MyModel()
>>> model.compile(...)
>>> # Build the graph by training it or passing dummy inputs
>>> _ = model(model.dummy_inputs)
>>> # Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")
>>> # Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")
>>> # Download and initialize weights from the Hub
>>> model = MyModel.from_pretrained("username/super-cool-model")
from_pretrained

( *args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A revision, specified by appending @ at the end of model_id, like this: dbmdz/bert-base-german-cased@main. The revision is the specific model version to use. It can be a branch name, a tag name, or a commit id; since we use a git-based system for storing models and other artifacts on huggingface.co, revision can be any identifier allowed by git.
- A path to a directory containing model weights saved using save_pretrained, e.g., ./my_model_directory/.
- None, if you are providing both the configuration and the state dictionary (with the keyword arguments config and state_dict, respectively).

force_download (bool, optional, defaults to False) —
Whether to force the (re-)download of the model weights and configuration
files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether to delete incompletely received files. Will attempt to resume the
download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g.,
{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies
are used on each request.

token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True,
will use the token generated when running transformers-cli login (stored in
~/.huggingface).

cache_dir (Union[str, os.PathLike], optional) —
Path to a directory in which a downloaded pretrained model configuration
should be cached if the standard cache should not be used.

local_files_only (bool, optional, defaults to False) —
Whether to only look at local files (i.e., do not try to download the
model).

model_kwargs (Dict, optional) —
Additional keyword arguments passed to the model during initialization.

Instantiate a pretrained Keras model from the Hub. The model is expected to
be in SavedModel format.

Passing token=True is required when you want to use a private model.
push_to_hub_keras

( model repo_path_or_name: typing.Optional[str] = None repo_url: typing.Optional[str] = None log_dir: typing.Optional[str] = None commit_message: str = 'Add Keras model' organization: typing.Optional[str] = None private: bool = False api_endpoint: typing.Optional[str] = None git_user: typing.Optional[str] = None git_email: typing.Optional[str] = None config: typing.Optional[dict] = None include_optimizer: bool = False tags: typing.Union[list, str, NoneType] = None plot_model: bool = True token: typing.Optional[str] = None repo_id: typing.Optional[str] = None branch: typing.Optional[str] = None create_pr: typing.Optional[bool] = None allow_patterns: typing.Union[typing.List[str], str, NoneType] = None ignore_patterns: typing.Union[typing.List[str], str, NoneType] = None **model_save_kwargs )

Parameters

model (Keras.Model) —
The Keras model you’d like to push to the Hub. The model must be compiled
and built.

repo_id (str) —
Repository name to which to push.

commit_message (str, optional, defaults to "Add Keras model") —
Message to commit while pushing.

private (bool, optional, defaults to False) —
Whether the repository created should be private.

api_endpoint (str, optional) —
The API endpoint to use when pushing the model to the hub.

token (str, optional) —
The token to use as HTTP bearer authorization for remote files. If not set,
will use the token set when logging in with huggingface-cli login (stored
in ~/.huggingface).

branch (str, optional) —
The git branch on which to push the model. This defaults to the default
branch as specified in your repository, which defaults to "main".

create_pr (boolean, optional, defaults to False) —
Whether or not to create a Pull Request from branch with that commit.

config (dict, optional) —
Configuration object to be saved alongside the model weights.

allow_patterns (List[str] or str, optional) —
If provided, only files matching at least one pattern are pushed.

ignore_patterns (List[str] or str, optional) —
If provided, files matching any of the patterns are not pushed.

log_dir (str, optional) —
TensorBoard logging directory to be pushed. The Hub automatically hosts and
displays a TensorBoard instance if log files are included in the
repository.

include_optimizer (bool, optional, defaults to False) —
Whether or not to include the optimizer during serialization.

tags (list or str, optional) —
List of tags related to the model, or a single tag as a string. See example
tags here.

plot_model (bool, optional, defaults to True) —
Setting this to True will plot the model and put it in the model card.
Requires graphviz and pydot to be installed.

model_save_kwargs (dict, optional) —
Additional keyword arguments passed to tf.keras.models.save_model().
Upload model checkpoint or tokenizer files to the Hub while synchronizing a
local clone of the repo in repo_path_or_name.

Use allow_patterns and ignore_patterns to precisely filter which files
should be pushed to the Hub. See the upload_folder() reference for more
details.
save_pretrained_keras

( model save_directory: typing.Union[str, pathlib.Path] config: typing.Union[typing.Dict[str, typing.Any], NoneType] = None include_optimizer: bool = False plot_model: bool = True tags: typing.Union[list, str, NoneType] = None **model_save_kwargs )

Parameters

model (Keras.Model) —
The Keras model you’d like to save. The model must be compiled and built.

save_directory (str or Path) —
Directory in which to save the Keras model.

config (dict, optional) —
Configuration object to be saved alongside the model weights.

include_optimizer (bool, optional, defaults to False) —
Whether or not to include the optimizer in serialization.

plot_model (bool, optional, defaults to True) —
Setting this to True will plot the model and put it in the model card.
Requires graphviz and pydot to be installed.

tags (str or list, optional) —
List of tags related to the model, or a single tag as a string. See example
tags here.

model_save_kwargs (dict, optional) —
Additional keyword arguments passed to tf.keras.models.save_model().
Saves a Keras model to save_directory in SavedModel format. Use this if you’re using the Functional or Sequential APIs.
from_pretrained_fastai

( repo_id: str revision: typing.Optional[str] = None )

Parameters

repo_id (str) —
The location where the pickled fastai.Learner is. It can be either of the
two:
- A repo id hosted on the Hugging Face Hub, optionally with a revision specified by appending @ at the end of repo_id, e.g., dbmdz/bert-base-german-cased@main. The revision is the specific model version to use. Since we use a git-based system for storing models and other artifacts on the Hugging Face Hub, it can be a branch name, a tag name, or a commit id.
- A path to a local directory containing the pickle and a pyproject.toml indicating the fastai and fastcore versions used to build the fastai.Learner, e.g., ./my_model_directory/.

revision (str, optional) —
Revision at which the repo’s files are downloaded. See the documentation of
snapshot_download.

Load a pretrained fastai model from the Hub or from a local directory.
push_to_hub_fastai

( learner repo_id: str commit_message: str = 'Add model' private: bool = False token: typing.Optional[str] = None config: typing.Optional[dict] = None branch: typing.Optional[str] = None create_pr: typing.Optional[bool] = None allow_patterns: typing.Union[typing.List[str], str, NoneType] = None ignore_patterns: typing.Union[typing.List[str], str, NoneType] = None api_endpoint: typing.Optional[str] = None git_user: typing.Optional[str] = None git_email: typing.Optional[str] = None )

Parameters

commit_message (str, optional, defaults to "Add model") —
Message to commit while pushing.

token (str, optional) —
The token to use as HTTP bearer authorization for remote files. If None,
the token will be requested via a prompt.
Upload learner checkpoint files to the Hub.
Use allow_patterns and ignore_patterns to precisely filter which files
should be pushed to the Hub. See the upload_folder() reference for more
details.
Raises the following error: