Repositories on the Hugging Face Hub differ from those on software development platforms. While both leverage the benefits of modern version control through Git, Hub repositories often contain files considerably different from those used to build traditional software.
They are:
- Large - in the range of GB or TB
- Binary - not human-readable by default (e.g., Safetensors or Parquet)
Managing these files in a Git repository has traditionally meant using Git LFS (Large File Storage), a Git extension.
On the Hub, Git LFS handles files larger than 10MB or whose extensions are listed in a .gitattributes file:
![ ADD IMAGE OF .gitattributes here ]
Instead of storing these files alongside the rest of the content in the repository, Git LFS routes the content to remote storage designed for large objects.
Git LFS then creates a "pointer file" which is stored in the repository for the given revision:
Example from a Hub repository
The fields in a pointer file that you will see on the Hub are:
- SHA256: Provides a unique identifier for the actual large file. This identifier is generated by computing the SHA-256 hash of the file’s contents.
- Pointer size: The size of the pointer file stored in the Git repository.
- Size of the remote file: Indicates the size of the actual large file in bytes. This metadata is useful for both verification purposes and for managing storage and transfer operations.
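For reference, the raw pointer file stored in Git follows the Git LFS key-value format; the values below are placeholders, not taken from a real repository:

```
version https://git-lfs.github.com/spec/v1
oid sha256:<sha-256 hash of the file's contents>
size <size of the remote file in bytes>
```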
As you can see, the pointer file is much smaller than the remote file, allowing the repository itself to remain small. This is especially important when cloning or fetching with Git, as only the remote files referenced at a specific commit are transferred, rather than every revision of every file.
The Hub’s Git LFS backend is Amazon Simple Storage Service (S3). When Git LFS is invoked, it stores the file contents in S3, using the SHA hash to name the file for future access. This storage architecture is relatively simple and has allowed the Hub to store files for millions of model, dataset, and Space repositories (45PB total as of this writing).
The main limitation of LFS is its file-centric approach to deduplication. Any change to a file, irrespective of how large or small that change is, means the entire file is versioned, incurring significant overhead in file transfers as the entire file is uploaded (when committing to a repository) or downloaded (when pulling the latest version to your machine).
This leads to a worse developer experience along with a proliferation of duplicated storage.
In August 2024, Hugging Face acquired XetHub, a seed-stage startup based in Seattle, to replace LFS on the Hub.
Like LFS, a Xet-backed repository utilizes S3 as the remote storage and stores pointer files in the repository.
Xet pointer files are nearly identical to LFS pointer files, with the addition of a Xet-backed hash field that is used for referencing the file in Xet storage.
Unlike LFS, Xet-enabled repositories use content-defined chunking (CDC) to deduplicate at the level of chunks (~64KB of data) for the large binary files found in Model and Dataset repositories. When a file is uploaded to a Xet-backed repository, its contents are broken down into these variable-sized chunks. New chunks are grouped together into 64MB blocks and uploaded, while previously seen chunks are discarded.
The Hub's current recommendation is to limit files to 20GB. At a 64KB chunk size, a 20GB file has 312,500 chunks, many of which go unchanged from version to version. Git LFS, by contrast, only notices that a file has changed and stores that revision in its entirety. By deduplicating at the chunk level, the Xet backend stores only the modified content of a file (which might be just a few chunks) and securely deduplicates shared blocks across repositories.
Supporting this requires coordination between the storage layer and the local machine interacting with the repository (and all the systems in between). There are four primary components to the Xet architecture:
- Client
- Hugging Face Hub
- Content addressed store (CAS)
- Amazon S3
![IMAGE OF XET ARCHITECTURE]
The client is whatever machine uploads or downloads a file. Current support is limited to hf_xet, a Python package that integrates huggingface_hub with Xet-backed repositories.
When uploading files to the Hub, hf_xet splits the files into immutable content-defined chunks and deduplicates, ignoring previously seen chunks and uploading only new ones.
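The upload-side deduplication amounts to "hash each chunk, skip what's been seen." A minimal sketch (my naming, not hf_xet's API; the real system also groups new chunks into 64MB blocks before sending):

```python
import hashlib

def upload_new_chunks(chunks, seen_hashes, upload):
    """Send only chunks whose hash hasn't been seen before.

    `upload` stands in for whatever transmits a chunk over the wire.
    Returns the number of chunks actually uploaded.
    """
    uploaded = 0
    for c in chunks:
        h = hashlib.sha256(c).hexdigest()
        if h not in seen_hashes:     # previously seen chunks are skipped
            upload(h, c)
            seen_hashes.add(h)
            uploaded += 1
    return uploaded
```

Re-uploading a lightly edited file then transmits only the handful of chunks that actually changed.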
On the download path, hf_xet communicates with CAS to get the reconstruction information for a file. This information is compared against the local chunk cache so that hf_xet only issues requests for uncached chunks.
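The cache-aware download path can be sketched like this; `fetch` stands in for a presigned-URL request for a chunk's content (a simplification of hf_xet's actual cache layout):

```python
import hashlib

def reconstruct(manifest, cache, fetch):
    """Rebuild a file from its ordered chunk-hash manifest.

    Only hashes missing from the local chunk cache trigger a fetch;
    repeated or previously downloaded chunks come from the cache.
    """
    out = bytearray()
    for h in manifest:
        if h not in cache:
            cache[h] = fetch(h)      # network round-trip only on cache miss
        out.extend(cache[h])
    return bytes(out)
```

A file whose chunks are mostly already cached (say, a prior revision was downloaded) requires only a few fetches to reconstruct.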
The Hub backend manages the Git repository, authentication & authorization, and metadata about both the files and repository. The Hub communicates with the client and CAS.
The content addressed store (CAS) is more than just a store: it is a set of services exposing APIs that support uploading and downloading Xet-backed files, with a key-value store (DynamoDB) mapping hashed content and metadata to its location in S3.
The primary APIs are used for:
- Uploading blocks: Verifies the contents of uploaded blocks, then writes them to the appropriate S3 bucket.
- Uploading shards: Verifies the contents of uploaded shards, writes them to the appropriate S3 bucket, and registers the shards in CAS.
- Downloading file reconstruction information: Given the Xet-backed hash field from a pointer file, assembles the manifest necessary to rebuild the file and returns it to the client, which downloads the relevant blocks directly from S3 using presigned URLs.
- Checking storage location: Given the LFS SHA256 hash, returns whether Xet or LFS manages the content. This is a critical part of migration and compatibility with the legacy LFS storage system.
- LFS Bridge: Allows Xet-backed repositories to be accessed by legacy, non-Xet-aware clients. The Bridge mimics an LFS server but does the work of reconstructing the requested file and returning it to the client. This allows downloading files through a single URL (so tools like curl or the Hub's web interface can download files).
S3 stores the blocks and shards. It provides resiliency, availability, and fast access, leveraging CloudFront as a CDN.
Xet Storage provides a seamless transition for existing Hub repositories: you don't need to know whether the Xet backend is involved at all. Xet-backed repositories continue to use the LFS pointer file format, with only the addition of the Xet-backed hash field. Existing and newly created repos look no different in a bare clone; each large binary file still has a pointer file that matches the Git LFS pointer file specification.
This symmetry allows non-Xet-enabled clients (e.g., older versions of the huggingface_hub that are not Xet-aware) to interact with Xet-backed repositories without concern. In fact, within a repository a mixture of LFS and Xet backed files are supported. As noted in the section describing the CAS APIs, the Xet backend indicates whether a file is in LFS or Xet storage, allowing downstream services (LFS or the LFS bridge) to provide the proper URL to S3, regardless of which storage system holds the content.
While a Xet-aware client receives file reconstruction information from CAS to download a Xet-backed file locally, a legacy client gets an S3 URL from the LFS bridge. Similarly, when uploading an update to a Xet-backed file, a Xet-aware client runs CDC deduplication and uploads through CAS, while a non-Xet-aware client uploads through LFS, and a background process converts the file revision to a Xet-backed version.
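The routing just described reduces to a small decision table; a sketch with my own naming, not the Hub's actual code:

```python
def resolve_download(stored_in_xet: bool, client_is_xet_aware: bool) -> str:
    """Pick the download path for a pointer file (illustrative sketch only)."""
    if not stored_in_xet:
        return "lfs"         # legacy content: presigned S3 URL via LFS
    if client_is_xet_aware:
        return "cas"         # reconstruction info + presigned block URLs
    return "lfs_bridge"      # bridge rebuilds the file server-side
```

Either way, the client ends up with the same bytes; only the path (and the amount of data transferred) differs.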