Dataset viewer columns: text (string, 3 to 7.31k chars), source (40 classes), file_type (1 class), id (string, 3 to 6 chars)
```python Write a DDUF file from an iterable of entries. This is a lower-level helper than [`export_folder_as_dduf`] that allows more flexibility when serializing data. In particular, you don't need to save the data on disk before exporting it to the DDUF file. Args: dduf_path (`str` or `os.PathLike`): The path to the DDUF file to write. entries (`Iterable[Tuple[str, Union[str, Path, bytes]]]`): An iterable of entries to write to the DDUF file. Each entry is a tuple with the filename and the content. The filename should be the path to the file in the DDUF archive. The content can be a string or a `pathlib.Path` pointing to a file on the local disk, or the content itself as bytes. Raises: - [`DDUFExportError`]: If anything goes wrong during the export (e.g. invalid entry name, missing 'model_index.json', etc.). Example: ```python
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_5
>>> from huggingface_hub import export_entries_as_dduf >>> export_entries_as_dduf( ... dduf_path="stable-diffusion-v1-4-FP16.dduf", ... entries=[ # List entries to add to the DDUF file (here, only FP16 weights) ... ("model_index.json", "path/to/model_index.json"), ... ("vae/config.json", "path/to/vae/config.json"), ... ("vae/diffusion_pytorch_model.fp16.safetensors", "path/to/vae/diffusion_pytorch_model.fp16.safetensors"), ... ("text_encoder/config.json", "path/to/text_encoder/config.json"), ... ("text_encoder/model.fp16.safetensors", "path/to/text_encoder/model.fp16.safetensors"), ... # ... add more entries here ... ] ... ) ``` ```python
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_6
>>> from diffusers import DiffusionPipeline >>> from typing import Generator, Tuple >>> import safetensors.torch >>> from huggingface_hub import export_entries_as_dduf >>> pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") ... # ... do some work with the pipeline >>> def as_entries(pipe: DiffusionPipeline) -> Generator[Tuple[str, bytes], None, None]: ... # Build a generator that yields the entries to add to the DDUF file. ... # The first element of the tuple is the filename in the DDUF archive (must use UNIX separator!). The second element is the content of the file. ... # Entries will be evaluated lazily when the DDUF file is created (only one entry is loaded in memory at a time). ... yield "vae/config.json", pipe.vae.to_json_string().encode() ... yield "vae/diffusion_pytorch_model.safetensors", safetensors.torch.save(pipe.vae.state_dict()) ... yield "text_encoder/config.json", pipe.text_encoder.config.to_json_string().encode() ... yield "text_encoder/model.safetensors", safetensors.torch.save(pipe.text_encoder.state_dict()) ... # ... add more entries here >>> export_entries_as_dduf(dduf_path="stable-diffusion-v1-4.dduf", entries=as_entries(pipe)) ``` ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_7
```python Export a folder as a DDUF file. Uses [`export_entries_as_dduf`] under the hood. Args: dduf_path (`str` or `os.PathLike`): The path to the DDUF file to write. folder_path (`str` or `os.PathLike`): The path to the folder containing the diffusion model. Example: ```python >>> from huggingface_hub import export_folder_as_dduf >>> export_folder_as_dduf(dduf_path="FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev") ``` ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_8
```python Read a DDUF file and return a dictionary of entries. Only the metadata is read; the data is not loaded in memory. Args: dduf_path (`str` or `os.PathLike`): The path to the DDUF file to read. Returns: `Dict[str, DDUFEntry]`: A dictionary of [`DDUFEntry`] indexed by filename. Raises: - [`DDUFCorruptedFileError`]: If the DDUF file is corrupted (i.e. doesn't follow the DDUF format). Example: ```python >>> import json >>> import safetensors.torch >>> from huggingface_hub import read_dduf_file
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_9
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_10
>>> dduf_entries["model_index.json"] DDUFEntry(filename='model_index.json', offset=66, length=587)
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_11
>>> json.loads(dduf_entries["model_index.json"].read_text()) {'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', ...
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_12
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm: ... state_dict = safetensors.torch.load(mm) ``` ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_13
```python Object representing a file entry in a DDUF file. See [`read_dduf_file`] for how to read a DDUF file. Attributes: filename (str): The name of the file in the DDUF archive. offset (int): The offset of the file in the DDUF archive. length (int): The length of the file in the DDUF archive. dduf_path (str): The path to the DDUF archive (for internal use). ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_14
```python Base exception for errors related to the DDUF format. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_15
```python Exception thrown when the DDUF file is corrupted. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_16
```python Base exception for errors during DDUF export. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_17
```python Exception thrown when the entry name is invalid. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_18
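All of the DDUF exceptions above derive from [`DDUFError`], so export and read failures can be handled with targeted `except` clauses. Below is a minimal, hedged sketch; it assumes the exceptions are importable from `huggingface_hub.errors` (adjust the import to your version) and uses placeholder paths.

```python
# Minimal sketch of defensive DDUF handling (paths are placeholders).
# Assumption: the DDUF exceptions are exposed in `huggingface_hub.errors`.
from huggingface_hub import export_folder_as_dduf, read_dduf_file
from huggingface_hub.errors import DDUFCorruptedFileError, DDUFExportError

try:
    export_folder_as_dduf(dduf_path="my-model.dduf", folder_path="path/to/model")
except DDUFExportError as e:
    # Covers any export problem: invalid entry name, missing 'model_index.json', ...
    print(f"Export failed: {e}")

try:
    entries = read_dduf_file("my-model.dduf")
except DDUFCorruptedFileError as e:
    # Raised when the file does not follow the DDUF format.
    print(f"Corrupted DDUF file: {e}")
```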
The main helper of the `serialization` module takes a torch `nn.Module` as input and saves it to disk. It handles the logic to save shared tensors (see [safetensors explanation](https://huggingface.co/docs/safetensors/torch_shared_tensors)) as well as logic to split the state dictionary into shards, using [`split_torch_state_dict_into_shards`] under the hood. At the moment, only the `torch` framework is supported. If you want to save a state dictionary (e.g. a mapping between layer names and related tensors) instead of a `nn.Module`, you can use [`save_torch_state_dict`] which provides the same features. This is useful for example if you want to apply custom logic to the state dict before saving it.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_19
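As a quick illustration of the two helpers described above, here is a minimal sketch (the folder paths are placeholders): `save_torch_model` takes the module directly, while `save_torch_state_dict` lets you transform the state dict first.

```python
# Minimal sketch contrasting the two saving helpers.
import torch.nn as nn
from huggingface_hub import save_torch_model, save_torch_state_dict

model = nn.Linear(4, 2)

# Save the module directly: sharding and shared-tensor handling are done for you.
save_torch_model(model, "path/to/folder")

# Or work on the state dict first (e.g. cast to fp16), then save it with the same features.
state_dict = {k: v.half() for k, v in model.state_dict().items()}
save_torch_state_dict(state_dict, "path/to/folder-fp16")
```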
```python Saves a given torch model to disk, handling sharding and shared tensor issues. See also [`save_torch_state_dict`] to save a state dict with more flexibility. For more information about tensor sharing, check out [this guide](https://huggingface.co/docs/safetensors/torch_shared_tensors). The model state dictionary is split into shards so that each shard is smaller than a given size. The shards are saved in the `save_directory` with the given `filename_pattern`. If the model is too big to fit in a single shard, an index file is saved in the `save_directory` to indicate where each tensor is saved. This helper uses [`split_torch_state_dict_into_shards`] under the hood. If `safe_serialization` is `True`, the shards are saved as safetensors (the default). Otherwise, the shards are saved as pickle. Before saving the model, the `save_directory` is cleaned of any previous shard files. <Tip warning={true}> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`. </Tip> <Tip warning={true}> If your model is a `transformers.PreTrainedModel`, you should pass `model._tied_weights_keys` as `shared_tensors_to_discard` to properly handle the saving of shared tensors. This ensures the correct duplicate tensors are discarded during saving. </Tip> Args: model (`torch.nn.Module`): The model to save on disk. save_directory (`str` or `Path`): The directory in which the model will be saved. filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"` or `pytorch_model{suffix}.bin` depending on the `safe_serialization` parameter. force_contiguous (`bool`, *optional*): Whether to force the state_dict to be saved as contiguous tensors. This has no effect on the correctness of the model, but it could potentially change performance if the layout of the tensor was chosen specifically for that reason. Defaults to `True`. max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB. metadata (`Dict[str, str]`, *optional*): Extra information to save along with the model. Some metadata will be added for each dropped tensor. This information will not be enough to recover the entire shared structure but might help in understanding things. safe_serialization (`bool`, *optional*): Whether to save as safetensors, which is the default behavior. If `False`, the shards are saved as pickle. Safe serialization is recommended for security reasons. Saving as pickle is deprecated and will be removed in a future version. is_main_process (`bool`, *optional*): Whether the process calling this is the main process or not. Useful in distributed training (e.g. on TPUs) where this function needs to be called from all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. Defaults to `True`. shared_tensors_to_discard (`List[str]`, *optional*): List of tensor names to drop when saving shared tensors. If not provided and shared tensors are detected, it will drop the first name alphabetically. Example: ```py >>> from huggingface_hub import save_torch_model >>> model = ... # A PyTorch model
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_20
>>> save_torch_model(model, "path/to/folder")
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_21
>>> # Load the model back >>> from huggingface_hub import load_torch_model >>> load_torch_model(model, "path/to/folder") ``` ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_22
```python Save a model state dictionary to disk, handling sharding and shared tensor issues. See also [`save_torch_model`] to directly save a PyTorch model. For more information about tensor sharing, check out [this guide](https://huggingface.co/docs/safetensors/torch_shared_tensors). The model state dictionary is split into shards so that each shard is smaller than a given size. The shards are saved in the `save_directory` with the given `filename_pattern`. If the model is too big to fit in a single shard, an index file is saved in the `save_directory` to indicate where each tensor is saved. This helper uses [`split_torch_state_dict_into_shards`] under the hood. If `safe_serialization` is `True`, the shards are saved as safetensors (the default). Otherwise, the shards are saved as pickle. Before saving the model, the `save_directory` is cleaned of any previous shard files. <Tip warning={true}> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`. </Tip> <Tip warning={true}> If your model is a `transformers.PreTrainedModel`, you should pass `model._tied_weights_keys` as `shared_tensors_to_discard` to properly handle the saving of shared tensors. This ensures the correct duplicate tensors are discarded during saving. </Tip> Args: state_dict (`Dict[str, torch.Tensor]`): The state dictionary to save. save_directory (`str` or `Path`): The directory in which the model will be saved. filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"` or `pytorch_model{suffix}.bin` depending on the `safe_serialization` parameter. force_contiguous (`bool`, *optional*): Whether to force the state_dict to be saved as contiguous tensors. This has no effect on the correctness of the model, but it could potentially change performance if the layout of the tensor was chosen specifically for that reason. Defaults to `True`. max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB. metadata (`Dict[str, str]`, *optional*): Extra information to save along with the model. Some metadata will be added for each dropped tensor. This information will not be enough to recover the entire shared structure but might help in understanding things. safe_serialization (`bool`, *optional*): Whether to save as safetensors, which is the default behavior. If `False`, the shards are saved as pickle. Safe serialization is recommended for security reasons. Saving as pickle is deprecated and will be removed in a future version. is_main_process (`bool`, *optional*): Whether the process calling this is the main process or not. Useful in distributed training (e.g. on TPUs) where this function needs to be called from all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. Defaults to `True`. shared_tensors_to_discard (`List[str]`, *optional*): List of tensor names to drop when saving shared tensors. If not provided and shared tensors are detected, it will drop the first name alphabetically. Example: ```py >>> from huggingface_hub import save_torch_state_dict >>> model = ... # A PyTorch model
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_23
>>> state_dict = model.state_dict() >>> save_torch_state_dict(state_dict, "path/to/folder") ``` ``` The `serialization` module also contains low-level helpers to split a state dictionary into several shards, while creating a proper index in the process. These helpers are available for `torch` and `tensorflow` tensors and are designed to be easily extended to any other ML framework.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_24
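For instance, a sharding plan can be computed without writing anything to disk, which is what the low-level helpers below do. A short sketch (the toy model and shard size are illustrative only):

```python
# Compute a sharding plan for a torch state dict and inspect it.
import torch.nn as nn
from huggingface_hub import split_torch_state_dict_into_shards

model = nn.Linear(10, 10)  # toy model for illustration
state_dict_split = split_torch_state_dict_into_shards(model.state_dict(), max_shard_size="2GB")

print(state_dict_split.is_sharded)                  # True only if several shards are needed
print(list(state_dict_split.filename_to_tensors))   # planned shard filenames
print(state_dict_split.tensor_to_filename)          # which shard each tensor goes to
```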
```python Split a model state dictionary into shards so that each shard is smaller than a given size. The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB], they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB]. <Tip warning={true}> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`. </Tip> Args: state_dict (`Dict[str, Tensor]`): The state dictionary to save. filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"tf_model{suffix}.h5"`. max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB. Returns: [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_25
```python Split a model state dictionary into shards so that each shard is smaller than a given size. The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB], they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB]. <Tip> To save a model state dictionary to disk, see [`save_torch_state_dict`]. This helper uses `split_torch_state_dict_into_shards` under the hood. </Tip> <Tip warning={true}> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`. </Tip> Args: state_dict (`Dict[str, torch.Tensor]`): The state dictionary to save. filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"`. max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB. Returns: [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them. Example: ```py >>> import json >>> import os >>> import torch >>> from typing import Dict >>> from safetensors.torch import save_file as safe_save_file >>> from huggingface_hub import split_torch_state_dict_into_shards >>> def save_state_dict(state_dict: Dict[str, torch.Tensor], save_directory: str): ... state_dict_split = split_torch_state_dict_into_shards(state_dict) ... for filename, tensors in state_dict_split.filename_to_tensors.items(): ... shard = {tensor: state_dict[tensor] for tensor in tensors} ... safe_save_file( ... shard, ... os.path.join(save_directory, filename), ... metadata={"format": "pt"}, ... ) ... if state_dict_split.is_sharded: ... index = { ... "metadata": state_dict_split.metadata, ... "weight_map": state_dict_split.tensor_to_filename, ... } ... with open(os.path.join(save_directory, "model.safetensors.index.json"), "w") as f: ... f.write(json.dumps(index, indent=2)) ``` ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_26
This is the underlying factory from which each framework-specific helper is derived. In practice, you are not expected to use this factory directly except if you need to adapt it to a framework that is not yet supported. If that is the case, please let us know by [opening a new issue](https://github.com/huggingface/huggingface_hub/issues/new) on the `huggingface_hub` repo.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_27
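To make the idea concrete, here is a hedged sketch of how a helper for another framework could be derived from the factory. The import path and the `.npz` filename pattern are assumptions for illustration; only the parameters documented below are taken from the actual signature.

```python
# Hedged sketch: derive a sharding helper for NumPy arrays from the factory.
# Assumption: the factory is importable from `huggingface_hub.serialization`
# (it may live in a private submodule depending on the version).
import numpy as np
from huggingface_hub.serialization import split_state_dict_into_shards_factory


def get_storage_size(tensor: np.ndarray) -> int:
    # Size in bytes of the array once saved on disk.
    return tensor.nbytes


def split_numpy_state_dict_into_shards(state_dict, max_shard_size="5GB"):
    return split_state_dict_into_shards_factory(
        state_dict,
        get_storage_size=get_storage_size,
        filename_pattern="model{suffix}.npz",  # hypothetical pattern for this sketch
        max_shard_size=max_shard_size,
    )
```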
```python Split a model state dictionary into shards so that each shard is smaller than a given size. The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB], they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB]. <Tip warning={true}> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`. </Tip> Args: state_dict (`Dict[str, Tensor]`): The state dictionary to save. get_storage_size (`Callable[[Tensor], int]`): A function that returns the size of a tensor when saved on disk in bytes. get_storage_id (`Callable[[Tensor], Optional[Any]]`, *optional*): A function that returns a unique identifier for a tensor storage. Multiple different tensors can share the same underlying storage. This identifier is guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id. filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB. Returns: [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_28
The loading helpers support both single-file and sharded checkpoints in either safetensors or pickle format. [`load_torch_model`] takes a `nn.Module` and a checkpoint path (either a single file or a directory) as input and loads the weights into the model.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_29
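A short sketch of the round trip (the folder path is a placeholder): the returned named tuple makes it easy to check which keys did not line up when `strict=False`.

```python
# Load a (possibly sharded) checkpoint back into a module and inspect mismatched keys.
import torch.nn as nn
from huggingface_hub import load_torch_model

model = nn.Linear(4, 2)  # must match the architecture that produced the checkpoint
result = load_torch_model(model, "path/to/folder", strict=False)

print(result.missing_keys)     # in the model but absent from the checkpoint
print(result.unexpected_keys)  # in the checkpoint but absent from the model
```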
```python Load a checkpoint into a model, handling both sharded and non-sharded checkpoints. Args: model (`torch.nn.Module`): The model in which to load the checkpoint. checkpoint_path (`str` or `os.PathLike`): Path to either the checkpoint file or directory containing the checkpoint(s). strict (`bool`, *optional*, defaults to `False`): Whether to strictly enforce that the keys in the model state dict match the keys in the checkpoint. safe (`bool`, *optional*, defaults to `True`): If `safe` is `True`, the safetensors files will be loaded. If `safe` is `False`, the function will first attempt to load safetensors files if they are available, otherwise it will fall back to loading pickle files. The `filename_pattern` parameter takes precedence over the `safe` parameter. weights_only (`bool`, *optional*, defaults to `False`): If `True`, only loads the model weights without optimizer states and other metadata. Only supported in PyTorch >= 1.13. map_location (`str` or `torch.device`, *optional*): A `torch.device` object, string or a dict specifying how to remap storage locations. It indicates the location where all tensors should be loaded. mmap (`bool`, *optional*, defaults to `False`): Whether to use memory-mapped file loading. Memory mapping can improve loading performance for large models in PyTorch >= 2.1.0 with zipfile-based checkpoints. filename_pattern (`str`, *optional*): The pattern to look for the index file. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"`. Returns: `NamedTuple`: A named tuple with `missing_keys` and `unexpected_keys` fields. - `missing_keys` is a list of str containing the missing keys, i.e. keys that are in the model but not in the checkpoint. - `unexpected_keys` is a list of str containing the unexpected keys, i.e. keys that are in the checkpoint but not in the model. Raises: [`FileNotFoundError`](https://docs.python.org/3/library/exceptions.html#FileNotFoundError) If the checkpoint file or directory does not exist. [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) If safetensors or torch is not installed when trying to load a .safetensors file or a PyTorch checkpoint respectively. [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) If the checkpoint path is invalid or if the checkpoint format cannot be determined. Example: ```python >>> from huggingface_hub import load_torch_model >>> model = ... # A PyTorch model >>> load_torch_model(model, "path/to/checkpoint") ``` ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_30
```python Loads a checkpoint file, handling both safetensors and pickle checkpoint formats. Args: checkpoint_file (`str` or `os.PathLike`): Path to the checkpoint file to load. Can be either a safetensors or pickle (`.bin`) checkpoint. map_location (`str` or `torch.device`, *optional*): A `torch.device` object, string or a dict specifying how to remap storage locations. It indicates the location where all tensors should be loaded. weights_only (`bool`, *optional*, defaults to `False`): If True, only loads the model weights without optimizer states and other metadata. Only supported for pickle (`.bin`) checkpoints with PyTorch >= 1.13. Has no effect when loading safetensors files. mmap (`bool`, *optional*, defaults to `False`): Whether to use memory-mapped file loading. Memory mapping can improve loading performance for large models in PyTorch >= 2.1.0 with zipfile-based checkpoints. Has no effect when loading safetensors files, as the `safetensors` library uses memory mapping by default. Returns: `Union[Dict[str, "torch.Tensor"], Any]`: The loaded checkpoint. - For safetensors files: always returns a dictionary mapping parameter names to tensors. - For pickle files: returns any Python object that was pickled (commonly a state dict, but could be an entire model, optimizer state, or any other Python object). Raises: [`FileNotFoundError`](https://docs.python.org/3/library/exceptions.html#FileNotFoundError) If the checkpoint file does not exist. [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) If safetensors or torch is not installed when trying to load a .safetensors file or a PyTorch checkpoint respectively. [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) If the checkpoint file format is invalid or if git-lfs files are not properly downloaded. [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) If the checkpoint file path is empty or invalid. Example: ```python >>> from huggingface_hub import load_state_dict_from_file
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_31
>>> state_dict = load_state_dict_from_file("path/to/model.bin", map_location="cpu") >>> model.load_state_dict(state_dict)
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_32
>>> state_dict = load_state_dict_from_file("path/to/model.safetensors") >>> model.load_state_dict(state_dict) ``` ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_33
```python Return a unique identifier for a tensor storage. Multiple different tensors can share the same underlying storage. This identifier is guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id. In the case of meta tensors, we return None since we can't tell if they share the same storage. Taken from https://github.com/huggingface/transformers/blob/1ecf5f7c982d761b4daaa96719d162c324187c64/src/transformers/pytorch_utils.py#L278. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_34
```python Taken from https://github.com/huggingface/safetensors/blob/08db34094e9e59e2f9218f2df133b7b4aaff5a99/bindings/python/py_src/safetensors/torch.py#L31C1-L41C59 ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md
.md
10_35
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/tensorboard.md
.md
11_0
TensorBoard is a visualization toolkit for machine learning experimentation. TensorBoard allows tracking and visualizing metrics such as loss and accuracy, visualizing the model graph, viewing histograms, displaying images and much more. TensorBoard is well integrated with the Hugging Face Hub. The Hub automatically detects TensorBoard traces (such as `tfevents`) when they are pushed to the Hub and starts an instance to visualize them. To get more information about TensorBoard integration on the Hub, check out [this guide](https://huggingface.co/docs/hub/tensorboard). To benefit from this integration, `huggingface_hub` provides a custom logger to push logs to the Hub. It works as a drop-in replacement for [SummaryWriter](https://tensorboardx.readthedocs.io/en/latest/tensorboard.html) with no extra code needed. Traces are still saved locally and a background job pushes them to the Hub at regular intervals.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/tensorboard.md
.md
11_1
```python Wrapper around tensorboard's `SummaryWriter` to push training logs to the Hub. Data is logged locally and then pushed to the Hub asynchronously. Pushing data to the Hub is done in a separate thread to avoid blocking the training script. In particular, if the upload fails for any reason (e.g. a connection issue), the main script will not be interrupted. Data is automatically pushed to the Hub every `commit_every` minutes (defaults to every 5 minutes). <Tip warning={true}> `HFSummaryWriter` is experimental. Its API is subject to change in the future without prior notice. </Tip> Args: repo_id (`str`): The id of the repo to which the logs will be pushed. logdir (`str`, *optional*): The directory where the logs will be written. If not specified, a local directory will be created by the underlying `SummaryWriter` object. commit_every (`int` or `float`, *optional*): The frequency (in minutes) at which the logs will be pushed to the Hub. Defaults to 5 minutes. squash_history (`bool`, *optional*): Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is useful to avoid degraded performance on the repo when it grows too large. repo_type (`str`, *optional*): The type of the repo to which the logs will be pushed. Defaults to "model". repo_revision (`str`, *optional*): The revision of the repo to which the logs will be pushed. Defaults to "main". repo_private (`bool`, *optional*): Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists. path_in_repo (`str`, *optional*): The path to the folder in the repo where the logs will be pushed. Defaults to "tensorboard/". repo_allow_patterns (`List[str]` or `str`, *optional*): A list of patterns to include in the upload. Defaults to `"*.tfevents.*"`. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details. repo_ignore_patterns (`List[str]` or `str`, *optional*): A list of patterns to exclude from the upload. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details. token (`str`, *optional*): Authentication token. Will default to the stored token. See https://huggingface.co/settings/token for more details. kwargs: Additional keyword arguments passed to `SummaryWriter`. Examples: ```diff
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/tensorboard.md
.md
11_2
- from torch.utils.tensorboard import SummaryWriter + from huggingface_hub import HFSummaryWriter import numpy as np - writer = SummaryWriter() + writer = HFSummaryWriter(repo_id="username/my-trained-model") for n_iter in range(100): writer.add_scalar('Loss/train', np.random.random(), n_iter) writer.add_scalar('Loss/test', np.random.random(), n_iter) writer.add_scalar('Accuracy/train', np.random.random(), n_iter) writer.add_scalar('Accuracy/test', np.random.random(), n_iter) ``` ```py >>> from huggingface_hub import HFSummaryWriter
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/tensorboard.md
.md
11_3
>>> with HFSummaryWriter(repo_id="test_hf_logger", commit_every=15) as logger: ... logger.add_scalar("a", 1) ... logger.add_scalar("b", 2) ``` ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/tensorboard.md
.md
11_4
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_0
Inference is the process of using a trained model to make predictions on new data. As this process can be compute-intensive, running on a dedicated server can be an interesting option. The `huggingface_hub` library provides an easy way to call a service that runs inference for hosted models. There are several services you can connect to: - [Inference API](https://huggingface.co/docs/api-inference/index): a service that allows you to run accelerated inference on Hugging Face's infrastructure for free. This service is a fast way to get started, test different models, and prototype AI products. - [Inference Endpoints](https://huggingface.co/inference-endpoints): a product to easily deploy models to production. Inference is run by Hugging Face in a dedicated, fully managed infrastructure on a cloud provider of your choice. These services can be called with the [`InferenceClient`] object. Please refer to [this guide](../guides/inference) for more information on how to use it.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_1
```python Initialize a new Inference Client. [`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used seamlessly with either the (free) Inference API or self-hosted Inference Endpoints. Args: model (`str`, `optional`): The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct` or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is automatically selected for the task. Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2 arguments are mutually exclusive. If using `base_url` for chat completion, the `/chat/completions` suffix path will be appended to the base URL (see the [TGI Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) documentation for details). When passing a URL as `model`, the client will not append any suffix path to it. token (`str` or `bool`, *optional*): Hugging Face token. Will default to the locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server. Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2 arguments are mutually exclusive and have the exact same behavior. timeout (`float`, `optional`): The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available. headers (`Dict[str, str]`, `optional`): Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values. cookies (`Dict[str, str]`, `optional`): Additional cookies to send to the server. proxies (`Any`, `optional`): Proxies to use for the request. base_url (`str`, `optional`): Base URL to run inference. This is a duplicated argument from `model` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None. api_key (`str`, `optional`): Token to use for authentication. This is a duplicated argument from `token` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_2
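As a quick usage sketch (the model id is only an example), a chat completion call looks like this:

```python
# Short usage sketch for the synchronous client.
from huggingface_hub import InferenceClient

client = InferenceClient(model="meta-llama/Meta-Llama-3-8B-Instruct")
response = client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```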
An async version of the client is also provided, based on `asyncio` and `aiohttp`. To use it, you can either install `aiohttp` directly or use the `[inference]` extra: ```sh pip install --upgrade huggingface_hub[inference] # or # pip install aiohttp ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_3
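Once `aiohttp` is installed, the async client mirrors the sync API with awaitable methods. A minimal sketch (the model id is illustrative):

```python
# Minimal async sketch; requires `aiohttp` (see the install command above).
import asyncio
from huggingface_hub import AsyncInferenceClient


async def main():
    client = AsyncInferenceClient(model="meta-llama/Meta-Llama-3-8B-Instruct")
    # Methods mirror the sync client but must be awaited.
    output = await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
    print(output)


asyncio.run(main())
```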
```python Initialize a new Inference Client. [`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used seamlessly with either the (free) Inference API or self-hosted Inference Endpoints. Args: model (`str`, `optional`): The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct` or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is automatically selected for the task. Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2 arguments are mutually exclusive. If using `base_url` for chat completion, the `/chat/completions` suffix path will be appended to the base URL (see the [TGI Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) documentation for details). When passing a URL as `model`, the client will not append any suffix path to it. token (`str` or `bool`, *optional*): Hugging Face token. Will default to the locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server. Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2 arguments are mutually exclusive and have the exact same behavior. timeout (`float`, `optional`): The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available. headers (`Dict[str, str]`, `optional`): Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values. cookies (`Dict[str, str]`, `optional`): Additional cookies to send to the server. trust_env ('bool', 'optional'): Trust environment settings for proxy configuration if the parameter is `True` (`False` by default). proxies (`Any`, `optional`): Proxies to use for the request. base_url (`str`, `optional`): Base URL to run inference. This is a duplicated argument from `model` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None. api_key (`str`, `optional`): Token to use for authentication. This is a duplicated argument from `token` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_4
```python Error raised when a model is unavailable or the request times out. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_5
```python This Dataclass represents the model status in the Hugging Face Inference API. Args: loaded (`bool`): If the model is currently loaded into Hugging Face's InferenceAPI. Models are loaded on-demand, leading to the user's first request taking longer. If a model is loaded, you can be assured that it is in a healthy state. state (`str`): The current state of the model. This can be 'Loaded', 'Loadable', 'TooBig'. If a model's state is 'Loadable', it's not too big and has a supported backend. Loadable models are automatically loaded when the user first requests inference on the endpoint. This means it is transparent for the user to load a model, except that the first call takes longer to complete. compute_type (`Dict`): Information about the compute resource the model is using or will use, such as 'gpu' type and number of replicas. framework (`str`): The name of the framework that the model was built with, such as 'transformers' or 'text-generation-inference'. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_6
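In practice, a `ModelStatus` is typically obtained through the client rather than built by hand. A hedged sketch, assuming `InferenceClient.get_model_status` is available in your version:

```python
# Query the status of a model on the Inference API (model id is an example).
from huggingface_hub import InferenceClient

client = InferenceClient()
status = client.get_model_status("meta-llama/Meta-Llama-3-8B-Instruct")
print(status.loaded, status.state, status.compute_type, status.framework)
```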
[`InferenceAPI`] is the legacy way to call the Inference API. The interface is simpler but requires knowing the input parameters and output format for each task. It also lacks the ability to connect to other services like Inference Endpoints or AWS SageMaker. [`InferenceAPI`] will soon be deprecated, so we recommend using [`InferenceClient`] whenever possible. Check out [this guide](../guides/inference#legacy-inferenceapi-client) to learn how to switch from [`InferenceAPI`] to [`InferenceClient`] in your scripts.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_7
```python Client to configure requests and make calls to the HuggingFace Inference API. Example: ```python >>> from huggingface_hub.inference_api import InferenceApi >>> # Mask-fill example >>> inference = InferenceApi("bert-base-uncased") >>> inference(inputs="The goal of life is [MASK].") [{'sequence': 'the goal of life is life.', 'score': 0.10933292657136917, 'token': 2166, 'token_str': 'life'}] >>> # Question Answering example >>> inference = InferenceApi("deepset/roberta-base-squad2") >>> inputs = { ... "question": "What's my name?", ... "context": "My name is Clara and I live in Berkeley.", ... } >>> inference(inputs) {'score': 0.9326569437980652, 'start': 11, 'end': 16, 'answer': 'Clara'} >>> # Zero-shot example >>> inference = InferenceApi("typeform/distilbert-base-uncased-mnli") >>> inputs = "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!" >>> params = {"candidate_labels": ["refund", "legal", "faq"]} >>> inference(inputs, params) {'sequence': 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!', 'labels': ['refund', 'faq', 'legal'], 'scores': [0.9378499388694763, 0.04914155602455139, 0.013008488342165947]} >>> # Overriding configured task >>> inference = InferenceApi("bert-base-uncased", task="feature-extraction") >>> # Text-to-image >>> inference = InferenceApi("stabilityai/stable-diffusion-2-1") >>> inference("cat") <PIL.PngImagePlugin.PngImageFile image (...)> >>> # Return as raw response to parse the output yourself >>> inference = InferenceApi("mio/amadeus") >>> response = inference("hello world", raw_response=True) >>> response.headers {"Content-Type": "audio/flac", ...} >>> response.content # raw bytes from server b'(...)' ``` ``` - __init__ - __call__ - all
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
12_8
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/hf_file_system.md
.md
13_0
The `HfFileSystem` class provides a pythonic file interface to the Hugging Face Hub based on [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/).
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/hf_file_system.md
.md
13_1
`HfFileSystem` is based on [fsspec](https://filesystem-spec.readthedocs.io/en/latest/), so it is compatible with most of the APIs that it offers. For more details, check out [our guide](../guides/hf_file_system) and fsspec's [API Reference](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem).
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/hf_file_system.md
.md
13_2
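A short sketch of common fsspec-style operations (the repository id is a placeholder):

```python
# List and read files on the Hub through the fsspec interface.
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# List files in a dataset repository (paths are prefixed with the repo type).
files = fs.ls("datasets/my-username/my-dataset-repo", detail=False)

# Read a file directly from the Hub.
with fs.open("datasets/my-username/my-dataset-repo/data.csv", "r") as f:
    header = f.readline()
```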
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_0
The `Repository` class is a helper class that wraps `git` and `git-lfs` commands. It provides tooling adapted for managing repositories which can be very large. Historically, it was the recommended tool for any workflow involving `git` operations or collaboration on a repository; it is now deprecated in favor of the HTTP-based alternatives in [`HfApi`] (see the warning below).
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_1
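A minimal sketch of the git-based workflow (the repository id is a placeholder); as noted below, prefer the HTTP-based [`HfApi`] for new code:

```python
# Clone, edit, commit and push with the (deprecated) Repository helper.
from huggingface_hub import Repository

repo = Repository(local_dir="my-model", clone_from="username/my-model")
repo.git_pull()
# ... edit files inside `my-model/` ...
repo.git_add(".")
repo.git_commit("Update weights")
repo.git_push()
```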
```python Helper class to wrap the git and git-lfs commands. The aim is to facilitate interacting with huggingface.co hosted model or dataset repos, though not a lot here (if any) is actually specific to huggingface.co. <Tip warning={true}> [`Repository`] is deprecated in favor of the http-based alternatives implemented in [`HfApi`]. Given its large adoption in legacy code, the complete removal of [`Repository`] will only happen in release `v1.0`. For more details, please read https://huggingface.co/docs/huggingface_hub/concepts/git_vs_http. </Tip> ``` - __init__ - current_branch - all
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_2
```python Check if the folder is the root of, or part of, a git repository. Args: folder (`str`): The folder in which to run the command. Returns: `bool`: `True` if the folder is the root of, or part of, a git repository, `False` otherwise. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_3
```python Check if the folder is a local clone of the `remote_url`. Args: folder (`str` or `Path`): The folder in which to run the command. remote_url (`str`): The URL of a git repository. Returns: `bool`: `True` if the repository is a local clone of the remote repository specified, `False` otherwise. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_4
```python Check if the file passed is tracked with git-lfs. Args: filename (`str` or `Path`): The filename to check. Returns: `bool`: `True` if the file passed is tracked with git-lfs, `False` otherwise. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_5
```python Check if file is git-ignored. Supports nested .gitignore files. Args: filename (`str` or `Path`): The filename to check. Returns: `bool`: `True` if the file passed is ignored by `git`, `False` otherwise. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_6
```python Returns a list of filenames that are to be staged. Args: pattern (`str` or `Path`): The pattern of filenames to check. Put `.` to get all files. folder (`str` or `Path`): The folder in which to run the command. Returns: `List[str]`: List of files that are to be staged. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_7
```python Check if the current checked-out branch is tracked upstream. Args: folder (`str` or `Path`): The folder in which to run the command. Returns: `bool`: `True` if the current checked-out branch is tracked upstream, `False` otherwise. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_8
```python Check the number of commits that would be pushed upstream. Args: folder (`str` or `Path`): The folder in which to run the command. upstream (`str`, *optional*): The name of the upstream repository with which the comparison should be made. Returns: `int`: Number of commits that would be pushed upstream were a `git push` to proceed. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_9
The `Repository` utility offers several methods which can be launched asynchronously: - `git_push` - `git_pull` - `push_to_hub` - The `commit` context manager See below for utilities to manage such asynchronous methods.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_10
```python Helper class to wrap the git and git-lfs commands. The aim is to facilitate interacting with huggingface.co hosted model or dataset repos, though not a lot here (if any) is actually specific to huggingface.co. <Tip warning={true}> [`Repository`] is deprecated in favor of the http-based alternatives implemented in [`HfApi`]. Given its large adoption in legacy code, the complete removal of [`Repository`] will only happen in release `v1.0`. For more details, please read https://huggingface.co/docs/huggingface_hub/concepts/git_vs_http. </Tip> ``` - commands_failed - commands_in_progress - wait_for_commands
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_11
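A hedged sketch of tracking a non-blocking push through the attributes listed above (the repository id is a placeholder, and `blocking=False` is assumed to be supported by your version):

```python
# Launch a push in the background and follow its progress.
from huggingface_hub import Repository

repo = Repository(local_dir="my-model", clone_from="username/my-model")
repo.push_to_hub(commit_message="Add new checkpoint", blocking=False)

print(repo.commands_in_progress)  # commands still running in the background
repo.wait_for_commands()          # block until every background command has finished
print(repo.commands_failed)       # commands that errored out, if any
```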
```python Utility to follow commands launched asynchronously. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
14_12
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_0
The `huggingface_hub` package exposes a `logging` utility to control the logging level of the package itself. You can import it as such: ```py from huggingface_hub import logging ``` Then, you may define the verbosity in order to update the amount of logs you'll see: ```python from huggingface_hub import logging logging.set_verbosity_error() logging.set_verbosity_warning() logging.set_verbosity_info() logging.set_verbosity_debug() logging.set_verbosity(...) ``` The levels should be understood as follows: - `error`: only show critical logs about usage which may result in an error or unexpected behavior. - `warning`: show logs that aren't critical but usage may result in unintended behavior. Additionally, important informative logs may be shown. - `info`: show most logs, including some verbose logging regarding what is happening under the hood. If something is behaving in an unexpected manner, we recommend switching the verbosity level to this in order to get more information. - `debug`: show all logs, including some internal logs which may be used to track exactly what's happening under the hood.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_1
The methods exposed below are relevant when modifying modules from the `huggingface_hub` library itself. Using these shouldn't be necessary if you use `huggingface_hub` and you don't modify them.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_10
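For example, a module extending `huggingface_hub` can obtain a logger that follows the library's verbosity settings:

```python
# Get a logger wired to the huggingface_hub verbosity settings.
from huggingface_hub import logging

logger = logging.get_logger(__name__)
logger.info("This message follows the verbosity configured for huggingface_hub.")
```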
Progress bars are a useful tool to display information to the user while a long-running task is being executed (e.g. when downloading or uploading files). `huggingface_hub` exposes a [`~utils.tqdm`] wrapper to display progress bars in a consistent way across the library. By default, progress bars are enabled. You can disable them globally by setting the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable. You can also enable/disable them using [`~utils.enable_progress_bars`] and [`~utils.disable_progress_bars`]. If set, the environment variable takes priority over the helpers. ```py >>> from huggingface_hub import snapshot_download >>> from huggingface_hub.utils import are_progress_bars_disabled, disable_progress_bars, enable_progress_bars >>> # Disable progress bars globally >>> disable_progress_bars() >>> # Progress bar will not be shown ! >>> snapshot_download("gpt2") >>> are_progress_bars_disabled() True >>> # Re-enable progress bars globally >>> enable_progress_bars() ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_12
You can also enable or disable progress bars for specific groups. This allows you to manage progress bar visibility more granularly within different parts of your application or library. When a progress bar is disabled for a group, all subgroups under it are also affected unless explicitly overridden. ```py # Disable progress bars for a specific group >>> disable_progress_bars("peft.foo") >>> assert not are_progress_bars_disabled("peft") >>> assert not are_progress_bars_disabled("peft.something") >>> assert are_progress_bars_disabled("peft.foo") >>> assert are_progress_bars_disabled("peft.foo.bar") # Re-enable progress bars for a subgroup >>> enable_progress_bars("peft.foo.bar") >>> assert are_progress_bars_disabled("peft.foo") >>> assert not are_progress_bars_disabled("peft.foo.bar") # Use groups with tqdm # No progress bar for `name="peft.foo"` >>> for _ in tqdm(range(5), name="peft.foo"): ... pass # Progress bar will be shown for `name="peft.foo.bar"` >>> for _ in tqdm(range(5), name="peft.foo.bar"): ... pass 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5/5 [00:00<00:00, 117817.53it/s] ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_13
```python Check if progress bars are disabled globally or for a specific group. This function returns whether progress bars are disabled for a given group or globally. It checks the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable first, then the programmatic settings. Args: name (`str`, *optional*): The group name to check; if None, checks the global setting. Returns: `bool`: True if progress bars are disabled, False otherwise. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_14
```python
Disable progress bars either globally or for a specified group.

This function updates the state of progress bars based on a group name. If no group name is provided, all progress bars are disabled. The operation respects the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable's setting.

Args:
    name (`str`, *optional*):
        The name of the group for which to disable the progress bars. If None, progress bars are disabled globally.

Raises:
    Warning: If the environment variable precludes changes.
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_15
```python
Enable progress bars either globally or for a specified group.

This function sets the progress bars to enabled for the specified group or globally if no group is specified. The operation is subject to the `HF_HUB_DISABLE_PROGRESS_BARS` environment setting.

Args:
    name (`str`, *optional*):
        The name of the group for which to enable the progress bars. If None, progress bars are enabled globally.

Raises:
    Warning: If the environment variable precludes changes.
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_16
In some environments, you might want to configure how HTTP calls are made, for example if you are using a proxy. `huggingface_hub` lets you configure this globally using [`configure_http_backend`]. All requests made to the Hub will then use your settings. Under the hood, `huggingface_hub` uses `requests.Session`, so you might want to refer to the [`requests` documentation](https://requests.readthedocs.io/en/latest/user/advanced) to learn more about the available parameters.

Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates one session instance per thread. Using sessions allows us to keep the connection open between HTTP calls and ultimately save time.

If you are integrating `huggingface_hub` in a third-party library and want to make a custom call to the Hub, use [`get_session`] to get a Session configured by your users (i.e. replace any `requests.get(...)` call by `get_session().get(...)`).
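For example, a downstream library can query the Hub API while honoring the proxy and certificate settings configured by its users. A minimal sketch (the endpoint URL and response key are illustrative, not a documented client API):

```py
from huggingface_hub import get_session

# Reuse the user-configured Session instead of calling requests.get(...) directly
response = get_session().get("https://huggingface.co/api/models/gpt2")
response.raise_for_status()
print(response.json().get("id"))
```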
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_17
```python
Configure the HTTP backend by providing a `backend_factory`. Any HTTP calls made by `huggingface_hub` will use a Session object instantiated by this factory. This can be useful if you are running your scripts in a specific environment requiring custom configuration (e.g. custom proxy or certificates).

Use [`get_session`] to get a configured Session. Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates 1 Session instance per thread. They are all instantiated using the same `backend_factory` set in [`configure_http_backend`]. An LRU cache is used to cache the created sessions (and connections) between calls. Max size is 128 to avoid memory leaks if thousands of threads are spawned.

See [this issue](https://github.com/psf/requests/issues/2766) to know more about thread-safety in `requests`.

Example:
```py
import requests
from huggingface_hub import configure_http_backend, get_session
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_18
# Create a factory function that returns a Session with configured proxies
def backend_factory() -> requests.Session:
    session = requests.Session()
    session.proxies = {"http": "http://10.10.1.10:3128", "https": "https://10.10.1.11:1080"}
    return session
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_19
# Register the factory as the default session backend
configure_http_backend(backend_factory=backend_factory)
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_20
session = get_session()
```
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_21
```python
Get a `requests.Session` object, using the session factory from the user.

Use [`get_session`] to get a configured Session. Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates 1 Session instance per thread. They are all instantiated using the same `backend_factory` set in [`configure_http_backend`]. An LRU cache is used to cache the created sessions (and connections) between calls. Max size is 128 to avoid memory leaks if thousands of threads are spawned.

See [this issue](https://github.com/psf/requests/issues/2766) to know more about thread-safety in `requests`.

Example:
```py
import requests
from huggingface_hub import configure_http_backend, get_session
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_22
# Create a factory function that returns a Session with configured proxies
def backend_factory() -> requests.Session:
    session = requests.Session()
    session.proxies = {"http": "http://10.10.1.10:3128", "https": "https://10.10.1.11:1080"}
    return session
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_23
# Register the factory as the default session backend
configure_http_backend(backend_factory=backend_factory)
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_24
session = get_session()
```
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_25
`huggingface_hub` defines its own HTTP errors to refine the `HTTPError` raised by `requests` with additional information sent back by the server.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_26
[`~utils.hf_raise_for_status`] is meant to be the central method to "raise for status" from any request made to the Hub. It wraps the base `requests.raise_for_status` to provide additional information. Any `HTTPError` thrown is converted into an `HfHubHTTPError`.

```py
import requests
from huggingface_hub.utils import hf_raise_for_status, HfHubHTTPError

response = requests.post(...)
try:
    hf_raise_for_status(response)
except HfHubHTTPError as e:
    print(str(e))  # formatted message
    e.request_id, e.server_message  # details returned by server

    # Complete the error message with additional information once it's raised
    e.append_to_message("\n`create_commit` expects the repository to exist.")
    raise
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_27
```python
Internal version of `response.raise_for_status()` that will refine a potential HTTPError. Raised exception will be an instance of `HfHubHTTPError`.

This helper is meant to be the unique method to raise_for_status when making a call to the Hugging Face Hub.

Example:
```py
import requests
from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError

response = get_session().post(...)
try:
    hf_raise_for_status(response)
except HfHubHTTPError as e:
    print(str(e))  # formatted message
    e.request_id, e.server_message  # details returned by server
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_28
    e.append_to_message(" `create_commit` expects the repository to exist.")
    raise
```

Args:
    response (`Response`):
        Response from the server.
    endpoint_name (`str`, *optional*):
        Name of the endpoint that has been called. If provided, the error message will be more complete.

<Tip warning={true}>

Raises when the request has failed:

- [`~utils.RepositoryNotFoundError`]
  If the repository to download from cannot be found. This may be because it doesn't exist, because `repo_type` is not set correctly, or because the repo is `private` and you do not have access.
- [`~utils.GatedRepoError`]
  If the repository exists but is gated and the user is not on the authorized list.
- [`~utils.RevisionNotFoundError`]
  If the repository exists but the revision couldn't be found.
- [`~utils.EntryNotFoundError`]
  If the repository exists but the entry (e.g. the requested file) couldn't be found.
- [`~utils.BadRequestError`]
  If the request failed with an HTTP 400 BadRequest error.
- [`~utils.HfHubHTTPError`]
  If the request failed for a reason not listed above.

</Tip>
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_29
Here is a list of HTTP errors thrown in `huggingface_hub`.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_30
`HfHubHTTPError` is the parent class for any HF Hub HTTP error. It takes care of parsing the server response and formatting the error message to provide as much information to the user as possible.
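Because the errors listed below all derive from `HfHubHTTPError`, you can catch a specific case first and fall back to the parent class. A minimal sketch (the repo id is a hypothetical placeholder):

```py
from huggingface_hub import model_info
from huggingface_hub.utils import HfHubHTTPError, RepositoryNotFoundError

try:
    info = model_info("some-user/maybe-missing-repo")  # hypothetical repo id
except RepositoryNotFoundError:
    # Most specific case: the repo doesn't exist, or is private and you are not authenticated
    print("Repository not found.")
except HfHubHTTPError as e:
    # Any other Hub HTTP error; the formatted message already includes server details
    print(f"Hub request failed: {e}")
```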
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_31
```python
HTTPError to inherit from for any custom HTTP Error raised in HF Hub.

Any HTTPError is converted at least into an `HfHubHTTPError`. If some information is sent back by the server, it will be added to the error message.

Added details:
- Request id from the "X-Request-Id" header if it exists. If not, fall back to the "X-Amzn-Trace-Id" header if it exists.
- Server error message from the "X-Error-Message" header.
- Server error message if one can be found in the response body.

Example:
```py
import requests
from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError

response = get_session().post(...)
try:
    hf_raise_for_status(response)
except HfHubHTTPError as e:
    print(str(e))  # formatted message
    e.request_id, e.server_message  # details returned by server
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_32
    e.append_to_message(" `create_commit` expects the repository to exist.")
    raise
```
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_33
```python
Raised when trying to access a hf.co URL with an invalid repository name, or with a private repo name the user does not have access to.

Example:
```py
>>> from huggingface_hub import model_info
>>> model_info("<non_existent_repository>")
(...)
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: PvMw_VjBMjVdMz53WKIzP)
Repository Not Found for url: https://huggingface.co/api/models/%3Cnon_existent_repository%3E. Please make sure you specified the correct `repo_id` and `repo_type`. If the repo is private, make sure you are authenticated. Invalid username or password.
```
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_34
```python
Raised when trying to access a gated repository for which the user is not on the authorized list.

Note: derives from `RepositoryNotFoundError` to ensure backward compatibility.

Example:
```py
>>> from huggingface_hub import model_info
>>> model_info("<gated_repository>")
(...)
huggingface_hub.utils._errors.GatedRepoError: 403 Client Error. (Request ID: ViT1Bf7O_026LGSQuVqfa)
Cannot access gated repo for url https://huggingface.co/api/models/ardent-figment/gated-model.
Access to model ardent-figment/gated-model is restricted and you are not in the authorized list. Visit https://huggingface.co/ardent-figment/gated-model to ask for access.
```
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_35
```python
Raised when trying to access a hf.co URL with a valid repository but an invalid revision.

Example:
```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', 'config.json', revision='<non-existent-revision>')
(...)
huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Mwhe_c3Kt650GcdKEFomX)
Revision Not Found for url: https://huggingface.co/bert-base-cased/resolve/%3Cnon-existent-revision%3E/config.json.
```
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_36
```python
Raised when trying to access a hf.co URL with a valid repository and revision but an invalid filename.

Example:
```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-existent-file>')
(...)
huggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error. (Request ID: 53pNl6M0MxsnG5Sw8JA6x)
Entry Not Found for url: https://huggingface.co/bert-base-cased/resolve/main/%3Cnon-existent-file%3E.
```
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
15_37