Storage
Repositories on the Hugging Face Hub are different from those on software development platforms. They contain files that are:
- Large - model or dataset files are in the range of GB and above. We have a few TB-scale files!
- Binary - not in a human readable format by default (e.g., Safetensors or Parquet)
While the Hub leverages modern version control with the support of Git, these differences make Model and Dataset repositories quite different from those that contain only source code.
Storing these files directly in a Git repository is impractical. Not only are the typical storage systems behind Git repositories unsuited for such files, but when you clone a repository, Git retrieves the entire history, including all file revisions. This can be prohibitively large for massive binaries, forcing you to download gigabytes of historic data you may never need.
Instead, on the Hub, these large files are tracked using “pointer files” and identified through a .gitattributes file (both discussed in more detail below). The pointer files remain in the Git repository while the actual data is stored in remote storage (like Amazon S3). As a result, the repository stays small and typical Git workflows remain efficient.
Historically, Hub repositories have relied on Git LFS for this mechanism. While Git LFS remains supported and widely used (see the Legacy section below), the Hub is introducing a modern custom storage system built specifically for AI/ML development, enabling chunk-level deduplication, smaller uploads, and faster downloads than Git LFS.
Xet
In August 2024, Hugging Face acquired XetHub, a seed-stage startup based in Seattle, to replace Git LFS on the Hub.
Like Git LFS, a Xet-backed repository utilizes S3 as the remote storage, with a .gitattributes file at the repository root identifying which files should be stored remotely.
Meanwhile, a Git LFS pointer file provides metadata to locate the actual file contents in remote storage:
- SHA256: Provides a unique identifier for the actual large file. This identifier is generated by computing the SHA-256 hash of the file’s contents.
- Pointer size: The size of the pointer file stored in the Git repository.
- Size of the remote file: Indicates the size of the actual large file in bytes. This metadata is useful for both verification purposes and for managing storage and transfer operations.
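For illustration, a Git LFS pointer file is just a small text file committed in place of the real content. A representative example (the hash and size below are made up) looks like this:

```
version https://git-lfs.github.com/spec/v1
oid sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
size 1068140800
```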
A Xet pointer includes all of this information by design, with the addition of a Xet backed hash field for referencing the file in Xet storage. Refer to the Backward Compatibility with LFS section below for details.
Unlike Git LFS, which deduplicates at the file level, Xet-enabled repositories deduplicate at the level of bytes. When a file backed by Xet storage is updated, only the modified data is uploaded to remote storage, significantly reducing network transfers. For many workflows, like incremental updates to model checkpoints or appending/inserting new data into a dataset, this improves iteration speed for you and your collaborators. To learn more about deduplication in Xet storage, refer to the Deduplication section below.
Using Xet Storage
To start using Xet Storage, you need a Xet-enabled repository and a Xet-aware version of the huggingface_hub Python library.
To make Xet the default for all your repositories, join the waitlist! You can apply for yourself or your entire organization (requires admin permissions). Once approved, all current repositories will be automatically migrated to Xet and future repositories will be Xet-enabled by default.
To access a Xet-aware client, add the hf_xet Python package when installing huggingface_hub:

```bash
pip install -U huggingface_hub[hf_xet]
```
If you use the transformers or datasets libraries, they already rely on huggingface_hub, so you can simply install hf_xet in the same environment:

```bash
pip install hf-xet
```
If your Python environment has an hf_xet-aware version of huggingface_hub, your uploads and downloads will automatically use Xet.
That’s it! You now get the benefits of Xet deduplication for both uploads and downloads. Team members using older huggingface_hub versions will still be able to upload and download repositories through the backwards compatibility provided by the LFS bridge.
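If you want to verify that the Xet path is available in your environment, one simple, optional sanity check is to confirm that the package is installed:

```bash
pip show hf-xet
```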
For more detailed usage docs, refer to the huggingface_hub documentation on uploading and downloading files.
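As a minimal sketch of such a workflow (the repo ID and filename below are placeholders), note that no Xet-specific code is required; with hf_xet installed, chunk-level transfers happen transparently:

```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()

# Upload a large file; with hf_xet installed, only new chunks are transferred.
api.upload_file(
    path_or_fileobj="model.safetensors",  # local file (placeholder)
    path_in_repo="model.safetensors",
    repo_id="your-username/your-model",   # placeholder repo
    repo_type="model",
)

# Download the same file; the Xet-aware client fetches it chunk by chunk.
local_path = hf_hub_download(
    repo_id="your-username/your-model",
    filename="model.safetensors",
)
print(local_path)
```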
Recommendations
Xet integrates seamlessly with the Hub’s current Python-based workflows. However, there are a few steps you can take to get the most out of Xet storage:
- Use hf_xet: While Xet remains backward compatible with legacy clients optimized for Git LFS, the hf_xet integration with huggingface_hub delivers optimal chunk-based performance and faster iteration on large files.
- Leverage frequent, incremental commits: Xet’s chunk-level deduplication means you can safely make incremental updates to models or datasets. Only changed chunks are uploaded, so frequent commits are both fast and storage-efficient.
- Be specific in .gitattributes: When defining patterns for Xet or LFS, use precise file extensions (e.g., *.safetensors, *.bin) to avoid unnecessarily routing smaller files through large-file storage; see the example after this list.
- Prioritize community access: Xet substantially increases the efficiency and scale of large file transfers. Instead of structuring your repository to reduce its total size (or the size of individual files), organize it for collaborators and community users so they can easily navigate and retrieve the content they need.
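For example, a tightly scoped .gitattributes routes only known large binary formats to large-file storage (the patterns below are illustrative):

```
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
```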
Current Limitations
While Xet brings fine-grained deduplication and enhanced performance to Git-based storage, some features and platform compatibilities are still in development. As a result, keep the following constraints in mind when working with a Xet-enabled repository:
- 64-bit systems only: The hf_xet client currently requires a 64-bit architecture; 32-bit systems are not supported.
- Partial JavaScript library support: The huggingface.js library has limited functionality with Xet-backed repositories; additional coverage is planned in future releases.
- Full web support currently unavailable: Full support for chunked uploads via the Hub web interface remains under development.
- Git client integration (git-xet): Planned but remains under development.
Deduplication
Xet-enabled repositories utilize content-defined chunking (CDC) to deduplicate at the level of byte ranges, or “chunks” (~64KB of data each). A rolling hash determines chunk boundaries based on the actual file contents, making the chunking resilient to insertions or deletions anywhere in the file, and each chunk is identified by a hash of its contents. When a file is uploaded to a Xet-backed repository using a Xet-aware client, its contents are broken down into these variable-sized chunks; only new chunks not already present in Xet storage are kept, and everything else is discarded.
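To build intuition for how CDC behaves, here is a toy chunker in Python. It is purely illustrative: the window size, hash function, and boundary mask are arbitrary demo choices (targeting ~8KB chunks rather than ~64KB) and not the parameters or algorithm used by the actual Xet implementation:

```python
import hashlib
import random

# Illustrative parameters only; the real xet-core chunker (written in Rust)
# uses different hash functions and tuned constants targeting ~64KB chunks.
WINDOW = 48           # bytes in the rolling-hash window
MASK = (1 << 13) - 1  # boundary when the low 13 bits are zero -> ~8KB average
BASE = 257
MOD = (1 << 61) - 1

def chunk(data: bytes):
    """Split data into content-defined chunks; return (sha256_hex, payload) pairs."""
    pw = pow(BASE, WINDOW, MOD)  # BASE^WINDOW, used to evict the oldest byte
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * pw) % MOD
        # A boundary depends only on the bytes in the window, so an insertion
        # or deletion shifts nearby boundaries without disturbing distant ones.
        if i + 1 - start >= WINDOW and (h & MASK) == 0:
            piece = data[start : i + 1]
            chunks.append((hashlib.sha256(piece).hexdigest(), piece))
            start = i + 1
    if start < len(data):
        piece = data[start:]
        chunks.append((hashlib.sha256(piece).hexdigest(), piece))
    return chunks

# A small edit leaves most chunk hashes (and thus stored chunks) intact.
random.seed(0)
v1 = bytes(random.getrandbits(8) for _ in range(256 * 1024))
v2 = v1[:100_000] + b"EDIT" + v1[100_000:]  # insert 4 bytes mid-file
h1 = {h for h, _ in chunk(v1)}
h2 = {h for h, _ in chunk(v2)}
print(f"chunks shared between versions: {len(h1 & h2)}/{len(h2)}")
```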
To avoid the overhead of communicating and managing data at the level of individual chunks, new chunks are grouped together into 64MB blocks and uploaded. Each block is stored once in a content-addressed store (CAS), keyed by its hash.
The Hub’s current recommendation is to limit files to 20GB. At a 64KB chunk size, a 20GB file has 312,500 chunks, many of which go unchanged from version to version. Git LFS is designed to notice only that a file has changed and store the entirety of that revision. By deduplicating at the level of chunks, the Xet backend enables storing only the modified content in a file (which might only be a few KB or MB) and securely deduplicates shared blocks across repositories. For the large binary files found in Model and Dataset repositories, this provides significant improvements to file transfer times.
For more details, refer to the From Files to Chunks and From Chunks to Blocks blog posts, or the Git is for Data paper by Low et al. that served as the launch point for XetHub prior to being acquired by Hugging Face.
Backward Compatibility with LFS
Xet storage provides a seamless transition for existing Hub repositories; it isn’t necessary to know whether the Xet backend is involved at all. Xet-backed repositories continue to use the Git LFS pointer file format, with only the addition of the Xet backed hash field. This means that existing repos and newly created repos will not look any different if you do a bare clone of them: each of the large (or binary) files will continue to have a pointer file that matches the Git LFS pointer file specification.
This symmetry allows non-Xet-aware clients (e.g., older versions of huggingface_hub) to interact with Xet-backed repositories without concern. In fact, a mixture of Git LFS and Xet-backed files is supported within a single repository. As noted in the section describing the CAS APIs, the Xet backend indicates whether a file is in Git LFS or Xet storage, allowing downstream services (Git LFS or the Git LFS bridge) to provide the proper URL to S3, regardless of which storage system holds the content.
While a Xet-aware client receives file reconstruction information from CAS to download a Xet-backed file locally, a legacy client gets an S3 URL from the Git LFS bridge. Likewise, when uploading an update to a Xet-backed file, a Xet-aware client runs CDC deduplication and uploads through CAS, while a non-Xet-aware client uploads through Git LFS and a background process converts the file revision to a Xet-backed version.
Security Model
Xet storage provides data deduplication over all chunks stored in Hugging Face. This is done via cryptographic hashing in a privacy-sensitive way: the contents of chunks are protected and are associated with repository permissions, so you can only read chunks that are required to reproduce files you have access to, and no more. See xet-core for details.
Legacy Storage: Git LFS
The Hub’s legacy storage system, Git LFS, utilizes many of the same conventions as Xet-backed repositories. The Hub’s Git LFS backend is Amazon Simple Storage Service (S3). When Git LFS is invoked, it stores the file contents in S3 using the SHA-256 hash to name the file for future access. This storage architecture is relatively simple and has allowed the Hub to store files from millions of model, dataset, and Space repositories (45PB total as of this writing).
The primary limitation of Git LFS is its file-centric approach to deduplication. Any change to a file, irrespective of how large or small that change is, means the entire file is versioned, incurring significant overhead in file transfers as the entire file is uploaded (when committing to a repository) or downloaded (when pulling the latest version to your machine). This leads to a worse developer experience along with a proliferation of redundant storage.